00:00:00.001 Started by upstream project "autotest-per-patch" build number 132344
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.115 The recommended git tool is: git
00:00:00.115 using credential 00000000-0000-0000-0000-000000000002
00:00:00.117 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.173 Fetching changes from the remote Git repository
00:00:00.175 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.225 Using shallow fetch with depth 1
00:00:00.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.225 > git --version # timeout=10
00:00:00.262 > git --version # 'git version 2.39.2'
00:00:00.262 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.214 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.224 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.235 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.235 > git config core.sparsecheckout # timeout=10
00:00:05.245 > git read-tree -mu HEAD # timeout=10
00:00:05.259 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.278 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.279 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.398 [Pipeline] Start of Pipeline
00:00:05.413 [Pipeline] library
00:00:05.414 Loading library shm_lib@master
00:00:05.414 Library shm_lib@master is cached. Copying from home.
00:00:05.431 [Pipeline] node
00:00:05.438 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.439 [Pipeline] {
00:00:05.446 [Pipeline] catchError
00:00:05.447 [Pipeline] {
00:00:05.455 [Pipeline] wrap
00:00:05.461 [Pipeline] {
00:00:05.466 [Pipeline] stage
00:00:05.468 [Pipeline] { (Prologue)
00:00:05.726 [Pipeline] sh
00:00:06.014 + logger -p user.info -t JENKINS-CI
00:00:06.034 [Pipeline] echo
00:00:06.036 Node: CYP12
00:00:06.044 [Pipeline] sh
00:00:06.349 [Pipeline] setCustomBuildProperty
00:00:06.358 [Pipeline] echo
00:00:06.360 Cleanup processes
00:00:06.363 [Pipeline] sh
00:00:06.651 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.651 938061 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.662 [Pipeline] sh
00:00:06.946 ++ grep -v 'sudo pgrep'
00:00:06.946 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.946 ++ awk '{print $1}'
00:00:06.946 + sudo kill -9
00:00:06.946 + true
00:00:06.961 [Pipeline] cleanWs
00:00:06.970 [WS-CLEANUP] Deleting project workspace...
00:00:06.970 [WS-CLEANUP] Deferred wipeout is used...
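The cleanup trace above boils down to one idiom: list candidate processes, drop the pgrep invocation itself, keep only the PIDs, and kill them, treating "nothing to kill" as success. A minimal consolidated sketch of that idiom, using this run's workspace path (the trailing `|| true` mirrors the `+ true` in the trace, so a run with no leftover processes does not fail the stage):

    # Kill any SPDK processes left over from a previous build of this workspace.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true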
00:00:06.977 [WS-CLEANUP] done
00:00:06.980 [Pipeline] setCustomBuildProperty
00:00:06.990 [Pipeline] sh
00:00:07.271 + sudo git config --global --replace-all safe.directory '*'
00:00:07.350 [Pipeline] httpRequest
00:00:07.684 [Pipeline] echo
00:00:07.686 Sorcerer 10.211.164.20 is alive
00:00:07.696 [Pipeline] retry
00:00:07.698 [Pipeline] {
00:00:07.711 [Pipeline] httpRequest
00:00:07.715 HttpMethod: GET
00:00:07.716 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.716 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.739 Response Code: HTTP/1.1 200 OK
00:00:07.739 Success: Status code 200 is in the accepted range: 200,404
00:00:07.740 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.960 [Pipeline] }
00:00:21.976 [Pipeline] // retry
00:00:21.983 [Pipeline] sh
00:00:22.272 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:22.288 [Pipeline] httpRequest
00:00:22.669 [Pipeline] echo
00:00:22.671 Sorcerer 10.211.164.20 is alive
00:00:22.680 [Pipeline] retry
00:00:22.681 [Pipeline] {
00:00:22.693 [Pipeline] httpRequest
00:00:22.698 HttpMethod: GET
00:00:22.698 URL: http://10.211.164.20/packages/spdk_8ccf9ce7b931ab985c2b5c597fc3a0a768ee8048.tar.gz
00:00:22.699 Sending request to url: http://10.211.164.20/packages/spdk_8ccf9ce7b931ab985c2b5c597fc3a0a768ee8048.tar.gz
00:00:22.713 Response Code: HTTP/1.1 200 OK
00:00:22.714 Success: Status code 200 is in the accepted range: 200,404
00:00:22.714 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8ccf9ce7b931ab985c2b5c597fc3a0a768ee8048.tar.gz
00:01:00.831 [Pipeline] }
00:01:00.848 [Pipeline] // retry
00:01:00.857 [Pipeline] sh
00:01:01.151 + tar --no-same-owner -xf spdk_8ccf9ce7b931ab985c2b5c597fc3a0a768ee8048.tar.gz
00:01:03.714 [Pipeline] sh
00:01:04.002 + git -C spdk log --oneline -n5
00:01:04.002 8ccf9ce7b accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:01:04.002 ac2633210 accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:01:04.002 3e396d94d bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:01:04.002 ecdb65a23 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf()
00:01:04.002 6745f139b bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:04.013 [Pipeline] }
00:01:04.027 [Pipeline] // stage
00:01:04.036 [Pipeline] stage
00:01:04.038 [Pipeline] { (Prepare)
00:01:04.053 [Pipeline] writeFile
00:01:04.069 [Pipeline] sh
00:01:04.357 + logger -p user.info -t JENKINS-CI
00:01:04.370 [Pipeline] sh
00:01:04.658 + logger -p user.info -t JENKINS-CI
00:01:04.670 [Pipeline] sh
00:01:04.955 + cat autorun-spdk.conf
00:01:04.955 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.955 SPDK_TEST_NVMF=1
00:01:04.955 SPDK_TEST_NVME_CLI=1
00:01:04.955 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.955 SPDK_TEST_NVMF_NICS=e810
00:01:04.955 SPDK_TEST_VFIOUSER=1
00:01:04.955 SPDK_RUN_UBSAN=1
00:01:04.955 NET_TYPE=phy
00:01:04.963 RUN_NIGHTLY=0
00:01:04.967 [Pipeline] readFile
00:01:04.990 [Pipeline] withEnv
00:01:04.992 [Pipeline] {
00:01:05.005 [Pipeline] sh
00:01:05.296 + set -ex
00:01:05.297 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:05.297 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:05.297 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.297 ++ SPDK_TEST_NVMF=1
00:01:05.297 ++ SPDK_TEST_NVME_CLI=1
00:01:05.297 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:05.297 ++ SPDK_TEST_NVMF_NICS=e810
00:01:05.297 ++ SPDK_TEST_VFIOUSER=1
00:01:05.297 ++ SPDK_RUN_UBSAN=1
00:01:05.297 ++ NET_TYPE=phy
00:01:05.297 ++ RUN_NIGHTLY=0
00:01:05.297 + case $SPDK_TEST_NVMF_NICS in
00:01:05.297 + DRIVERS=ice
00:01:05.297 + [[ tcp == \r\d\m\a ]]
00:01:05.297 + [[ -n ice ]]
00:01:05.297 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:05.297 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:05.297 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:05.297 rmmod: ERROR: Module irdma is not currently loaded
00:01:05.297 rmmod: ERROR: Module i40iw is not currently loaded
00:01:05.297 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:05.297 + true
00:01:05.297 + for D in $DRIVERS
00:01:05.297 + sudo modprobe ice
00:01:05.297 + exit 0
00:01:05.308 [Pipeline] }
00:01:05.322 [Pipeline] // withEnv
00:01:05.327 [Pipeline] }
00:01:05.340 [Pipeline] // stage
00:01:05.349 [Pipeline] catchError
00:01:05.350 [Pipeline] {
00:01:05.363 [Pipeline] timeout
00:01:05.364 Timeout set to expire in 1 hr 0 min
00:01:05.365 [Pipeline] {
00:01:05.378 [Pipeline] stage
00:01:05.380 [Pipeline] { (Tests)
00:01:05.392 [Pipeline] sh
00:01:05.681 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:05.681 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:05.681 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:05.681 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:05.681 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:05.681 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:05.681 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:05.681 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:05.681 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:05.681 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:05.681 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:05.681 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:05.681 + source /etc/os-release
00:01:05.681 ++ NAME='Fedora Linux'
00:01:05.681 ++ VERSION='39 (Cloud Edition)'
00:01:05.681 ++ ID=fedora
00:01:05.681 ++ VERSION_ID=39
00:01:05.681 ++ VERSION_CODENAME=
00:01:05.681 ++ PLATFORM_ID=platform:f39
00:01:05.681 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:05.681 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:05.681 ++ LOGO=fedora-logo-icon
00:01:05.681 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:05.681 ++ HOME_URL=https://fedoraproject.org/
00:01:05.681 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:05.681 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:05.681 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:05.681 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:05.681 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:05.681 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:05.681 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:05.681 ++ SUPPORT_END=2024-11-12
00:01:05.681 ++ VARIANT='Cloud Edition'
00:01:05.681 ++ VARIANT_ID=cloud
00:01:05.681 + uname -a
00:01:05.681 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:05.681 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:08.982 Hugepages
00:01:08.982 node hugesize free / total
00:01:08.982 node0 1048576kB 0 / 0
00:01:08.982 node0 2048kB 0 / 0
00:01:08.982 node1 1048576kB 0 / 0
00:01:08.982 node1 2048kB 0 / 0
00:01:08.982
00:01:08.982 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:08.982 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:08.982 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:09.244 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:09.244 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:09.244 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:09.244 + rm -f /tmp/spdk-ld-path
00:01:09.244 + source autorun-spdk.conf
00:01:09.244 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.244 ++ SPDK_TEST_NVMF=1
00:01:09.244 ++ SPDK_TEST_NVME_CLI=1
00:01:09.244 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.244 ++ SPDK_TEST_NVMF_NICS=e810
00:01:09.244 ++ SPDK_TEST_VFIOUSER=1
00:01:09.244 ++ SPDK_RUN_UBSAN=1
00:01:09.244 ++ NET_TYPE=phy
00:01:09.244 ++ RUN_NIGHTLY=0
00:01:09.244 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:09.244 + [[ -n '' ]]
00:01:09.244 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:09.244 + for M in /var/spdk/build-*-manifest.txt
00:01:09.244 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:09.244 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.244 + for M in /var/spdk/build-*-manifest.txt
00:01:09.244 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:09.244 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.244 + for M in /var/spdk/build-*-manifest.txt
00:01:09.244 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:09.244 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:09.244 ++ uname
00:01:09.244 + [[ Linux == \L\i\n\u\x ]]
00:01:09.244 + sudo dmesg -T
00:01:09.244 + sudo dmesg --clear
00:01:09.244 + dmesg_pid=939162
00:01:09.244 + [[ Fedora Linux == FreeBSD ]]
00:01:09.244 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.244 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:09.244 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:09.244 + [[ -x /usr/src/fio-static/fio ]]
00:01:09.244 + export FIO_BIN=/usr/src/fio-static/fio
00:01:09.244 + FIO_BIN=/usr/src/fio-static/fio
00:01:09.244 + sudo dmesg -Tw
00:01:09.244 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:09.244 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:09.244 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:09.244 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.244 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:09.244 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:09.244 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.244 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:09.244 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.244 07:01:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:09.244 07:01:43 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:09.244 07:01:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:09.244 07:01:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:09.244 07:01:43 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.505 07:01:44 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:09.505 07:01:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:09.505 07:01:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:09.505 07:01:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:09.505 07:01:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:09.505 07:01:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:09.505 07:01:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.505 07:01:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.505 07:01:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.505 07:01:44 -- paths/export.sh@5 -- $ export PATH
00:01:09.505 07:01:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:09.505 07:01:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:09.505 07:01:44 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:09.505 07:01:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082504.XXXXXX
00:01:09.505 07:01:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082504.5mTOtk
00:01:09.505 07:01:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:09.505 07:01:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:09.505 07:01:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:09.505 07:01:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:09.505 07:01:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:09.505 07:01:44 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:09.505 07:01:44 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:09.505 07:01:44 -- common/autotest_common.sh@10 -- $ set +x
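The autorun-spdk.conf dumped in the Prepare stage drives everything that follows: the withEnv step sources it to decide which NIC driver to reload (SPDK_TEST_NVMF_NICS=e810 selects ice here), and spdk/autorun.sh sources it again before handing off to autobuild.sh. A hedged sketch of that consumption, reconstructed from the xtrace above (the e810-to-ice case arm is inferred from this run's output, not quoted from the script):

    # Load the test configuration and reload the NIC driver it names.
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;    # mapping observed in this run; other NICs map to other drivers
    esac
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true    # stale RDMA modules may simply not be loaded
    for D in $DRIVERS; do
        sudo modprobe $D
    done
    spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf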
00:01:09.505 07:01:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:09.505 07:01:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:09.505 07:01:44 -- pm/common@17 -- $ local monitor
00:01:09.505 07:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.505 07:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.505 07:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.505 07:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:09.505 07:01:44 -- pm/common@25 -- $ sleep 1
00:01:09.505 07:01:44 -- pm/common@21 -- $ date +%s
00:01:09.505 07:01:44 -- pm/common@21 -- $ date +%s
00:01:09.505 07:01:44 -- pm/common@21 -- $ date +%s
00:01:09.505 07:01:44 -- pm/common@21 -- $ date +%s
00:01:09.505 07:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082504
00:01:09.505 07:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082504
00:01:09.505 07:01:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082504
00:01:09.505 07:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082504
00:01:09.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082504_collect-cpu-load.pm.log
00:01:09.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082504_collect-vmstat.pm.log
00:01:09.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082504_collect-cpu-temp.pm.log
00:01:09.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082504_collect-bmc-pm.bmc.pm.log
00:01:10.447 07:01:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:10.447 07:01:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:10.447 07:01:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:10.447 07:01:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.447 07:01:45 -- spdk/autobuild.sh@16 -- $ date -u
00:01:10.447 Wed Nov 20 06:01:45 AM UTC 2024
00:01:10.447 07:01:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:10.447 v25.01-pre-198-g8ccf9ce7b
00:01:10.447 07:01:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:10.447 07:01:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:10.447 07:01:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:10.447 07:01:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:10.447 07:01:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:10.447 07:01:45 -- common/autotest_common.sh@10 -- $ set +x
************************************
00:01:10.447 START TEST ubsan
00:01:10.447 ************************************
00:01:10.447 07:01:45 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:10.447 using ubsan
00:01:10.447
00:01:10.447 real 0m0.000s
00:01:10.447 user 0m0.000s
00:01:10.447 sys 0m0.000s
00:01:10.447 07:01:45 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:10.447 07:01:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:10.447 ************************************
00:01:10.447 END TEST ubsan
00:01:10.447 ************************************
00:01:10.447 07:01:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:10.447 07:01:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:10.447 07:01:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:10.447 07:01:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:10.708 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:10.708 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:10.969 Using 'verbs' RDMA provider
00:01:24.146 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:39.052 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:39.052 Creating mk/config.mk...done.
00:01:39.052 Creating mk/cc.flags.mk...done.
00:01:39.052 Type 'make' to build.
00:01:39.052 07:02:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:39.052 07:02:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:39.052 07:02:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:39.052 07:02:13 -- common/autotest_common.sh@10 -- $ set +x
00:01:39.052 ************************************
00:01:39.052 START TEST make
00:01:39.052 ************************************
00:01:39.052 07:02:13 make -- common/autotest_common.sh@1127 -- $ make -j144
00:01:39.052 make[1]: Nothing to be done for 'all'.
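The configure invocation and the run_test make wrapper above are the reproducible core of this job. Outside CI the same build reduces to roughly the following (a sketch; the flags are copied verbatim from the trace, including the --with-shared that autobuild.sh appends, and -j144 matches the job count this host used):

    # Configure and build SPDK the way autobuild.sh does in this log.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j144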
00:01:40.432 The Meson build system
00:01:40.432 Version: 1.5.0
00:01:40.432 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:40.432 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:40.432 Build type: native build
00:01:40.432 Project name: libvfio-user
00:01:40.432 Project version: 0.0.1
00:01:40.432 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:40.432 C linker for the host machine: cc ld.bfd 2.40-14
00:01:40.432 Host machine cpu family: x86_64
00:01:40.432 Host machine cpu: x86_64
00:01:40.432 Run-time dependency threads found: YES
00:01:40.432 Library dl found: YES
00:01:40.432 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:40.432 Run-time dependency json-c found: YES 0.17
00:01:40.432 Run-time dependency cmocka found: YES 1.1.7
00:01:40.432 Program pytest-3 found: NO
00:01:40.432 Program flake8 found: NO
00:01:40.432 Program misspell-fixer found: NO
00:01:40.432 Program restructuredtext-lint found: NO
00:01:40.432 Program valgrind found: YES (/usr/bin/valgrind)
00:01:40.432 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.432 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.432 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.432 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.432 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:40.432 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:40.432 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.432 Build targets in project: 8
00:01:40.432 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:40.432 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:40.432
00:01:40.432 libvfio-user 0.0.1
00:01:40.432
00:01:40.432 User defined options
00:01:40.432 buildtype : debug
00:01:40.432 default_library: shared
00:01:40.432 libdir : /usr/local/lib
00:01:40.432
00:01:40.432 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:40.690 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:40.690 [1/37] Compiling C object samples/null.p/null.c.o
00:01:40.690 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:40.690 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:40.690 [4/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:40.690 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:40.690 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:40.690 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:40.690 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:40.690 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:40.690 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:40.690 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:40.690 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:40.690 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:40.949 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:40.949 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:40.949 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:40.949 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:40.949 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:40.949 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:40.949 [20/37] Compiling C object samples/server.p/server.c.o
00:01:40.949 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:40.949 [22/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:40.949 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:40.949 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:40.949 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:40.949 [26/37] Compiling C object samples/client.p/client.c.o
00:01:40.949 [27/37] Linking target samples/client
00:01:40.949 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:40.949 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:40.949 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:40.949 [31/37] Linking target test/unit_tests
00:01:41.207 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:41.207 [33/37] Linking target samples/null
00:01:41.207 [34/37] Linking target samples/server
00:01:41.207 [35/37] Linking target samples/gpio-pci-idio-16
00:01:41.207 [36/37] Linking target samples/lspci
00:01:41.207 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:41.207 INFO: autodetecting backend as ninja
00:01:41.207 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.207 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.467 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.467 ninja: no work to do.
00:01:48.058 The Meson build system
00:01:48.058 Version: 1.5.0
00:01:48.058 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:48.058 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:48.058 Build type: native build
00:01:48.058 Program cat found: YES (/usr/bin/cat)
00:01:48.058 Project name: DPDK
00:01:48.058 Project version: 24.03.0
00:01:48.058 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:48.058 C linker for the host machine: cc ld.bfd 2.40-14
00:01:48.058 Host machine cpu family: x86_64
00:01:48.058 Host machine cpu: x86_64
00:01:48.058 Message: ## Building in Developer Mode ##
00:01:48.058 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:48.058 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:48.058 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:48.058 Program python3 found: YES (/usr/bin/python3)
00:01:48.058 Program cat found: YES (/usr/bin/cat)
00:01:48.058 Compiler for C supports arguments -march=native: YES
00:01:48.058 Checking for size of "void *" : 8
00:01:48.058 Checking for size of "void *" : 8 (cached)
00:01:48.058 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:48.058 Library m found: YES
00:01:48.058 Library numa found: YES
00:01:48.058 Has header "numaif.h" : YES
00:01:48.058 Library fdt found: NO
00:01:48.058 Library execinfo found: NO
00:01:48.058 Has header "execinfo.h" : YES
00:01:48.058 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:48.058 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:48.058 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:48.058 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:48.058 Run-time dependency openssl found: YES 3.1.1
00:01:48.058 Run-time dependency libpcap found: YES 1.10.4
00:01:48.058 Has header "pcap.h" with dependency libpcap: YES
00:01:48.058 Compiler for C supports arguments -Wcast-qual: YES
00:01:48.058 Compiler for C supports arguments -Wdeprecated: YES
00:01:48.058 Compiler for C supports arguments -Wformat: YES
00:01:48.058 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:48.058 Compiler for C supports arguments -Wformat-security: NO
00:01:48.058 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:48.058 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:48.058 Compiler for C supports arguments -Wnested-externs: YES
00:01:48.058 Compiler for C supports arguments -Wold-style-definition: YES
00:01:48.058 Compiler for C supports arguments -Wpointer-arith: YES
00:01:48.058 Compiler for C supports arguments -Wsign-compare: YES
00:01:48.058 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:48.058 Compiler for C supports arguments -Wundef: YES
00:01:48.058 Compiler for C supports arguments -Wwrite-strings: YES
00:01:48.058 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:48.058 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:48.058 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:48.058 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:48.058 Program objdump found: YES (/usr/bin/objdump)
00:01:48.058 Compiler for C supports arguments -mavx512f: YES
00:01:48.058 Checking if "AVX512 checking" compiles: YES
00:01:48.058 Fetching value of define "__SSE4_2__" : 1
00:01:48.058 Fetching value of define "__AES__" : 1
00:01:48.058 Fetching value of define "__AVX__" : 1
00:01:48.058 Fetching value of define "__AVX2__" : 1
00:01:48.058 Fetching value of define "__AVX512BW__" : 1
00:01:48.058 Fetching value of define "__AVX512CD__" : 1
00:01:48.058 Fetching value of define "__AVX512DQ__" : 1
00:01:48.058 Fetching value of define "__AVX512F__" : 1
00:01:48.058 Fetching value of define "__AVX512VL__" : 1
00:01:48.058 Fetching value of define "__PCLMUL__" : 1
00:01:48.058 Fetching value of define "__RDRND__" : 1
00:01:48.058 Fetching value of define "__RDSEED__" : 1
00:01:48.058 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:48.058 Fetching value of define "__znver1__" : (undefined)
00:01:48.058 Fetching value of define "__znver2__" : (undefined)
00:01:48.058 Fetching value of define "__znver3__" : (undefined)
00:01:48.058 Fetching value of define "__znver4__" : (undefined)
00:01:48.058 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:48.058 Message: lib/log: Defining dependency "log"
00:01:48.058 Message: lib/kvargs: Defining dependency "kvargs"
00:01:48.058 Message: lib/telemetry: Defining dependency "telemetry"
00:01:48.058 Checking for function "getentropy" : NO
00:01:48.058 Message: lib/eal: Defining dependency "eal"
00:01:48.058 Message: lib/ring: Defining dependency "ring"
00:01:48.058 Message: lib/rcu: Defining dependency "rcu"
00:01:48.058 Message: lib/mempool: Defining dependency "mempool"
00:01:48.058 Message: lib/mbuf: Defining dependency "mbuf"
00:01:48.058 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:48.058 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:48.058 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:48.058 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:48.058 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:48.058 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:48.058 Compiler for C supports arguments -mpclmul: YES
00:01:48.058 Compiler for C supports arguments -maes: YES
00:01:48.058 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:48.058 Compiler for C supports arguments -mavx512bw: YES
00:01:48.058 Compiler for C supports arguments -mavx512dq: YES
00:01:48.058 Compiler for C supports arguments -mavx512vl: YES
00:01:48.058 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:48.058 Compiler for C supports arguments -mavx2: YES
00:01:48.058 Compiler for C supports arguments -mavx: YES
00:01:48.058 Message: lib/net: Defining dependency "net"
00:01:48.058 Message: lib/meter: Defining dependency "meter"
00:01:48.058 Message: lib/ethdev: Defining dependency "ethdev"
00:01:48.058 Message: lib/pci: Defining dependency "pci"
00:01:48.058 Message: lib/cmdline: Defining dependency "cmdline"
00:01:48.058 Message: lib/hash: Defining dependency "hash"
00:01:48.058 Message: lib/timer: Defining dependency "timer"
00:01:48.058 Message: lib/compressdev: Defining dependency "compressdev"
00:01:48.058 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:48.058 Message: lib/dmadev: Defining dependency "dmadev"
00:01:48.058 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:48.058 Message: lib/power: Defining dependency "power"
00:01:48.058 Message: lib/reorder: Defining dependency "reorder"
00:01:48.058 Message: lib/security: Defining dependency "security"
00:01:48.058 Has header "linux/userfaultfd.h" : YES
00:01:48.058 Has header "linux/vduse.h" : YES
00:01:48.058 Message: lib/vhost: Defining dependency "vhost"
00:01:48.058 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:48.058 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:48.058 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:48.058 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:48.058 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:48.058 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:48.058 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:48.058 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:48.058 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:48.058 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:48.058 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:48.058 Configuring doxy-api-html.conf using configuration
00:01:48.058 Configuring doxy-api-man.conf using configuration
00:01:48.059 Program mandb found: YES (/usr/bin/mandb)
00:01:48.059 Program sphinx-build found: NO
00:01:48.059 Configuring rte_build_config.h using configuration
00:01:48.059 Message:
00:01:48.059 =================
00:01:48.059 Applications Enabled
00:01:48.059 =================
00:01:48.059
00:01:48.059 apps:
00:01:48.059
00:01:48.059
00:01:48.059 Message:
00:01:48.059 =================
00:01:48.059 Libraries Enabled
00:01:48.059 =================
00:01:48.059
00:01:48.059 libs:
00:01:48.059 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:48.059 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:48.059 cryptodev, dmadev, power, reorder, security, vhost,
00:01:48.059
00:01:48.059 Message:
00:01:48.059 ===============
00:01:48.059 Drivers Enabled
00:01:48.059 ===============
00:01:48.059
00:01:48.059 common:
00:01:48.059
00:01:48.059 bus:
00:01:48.059 pci, vdev,
00:01:48.059 mempool:
00:01:48.059 ring,
00:01:48.059 dma:
00:01:48.059
00:01:48.059 net:
00:01:48.059
00:01:48.059 crypto:
00:01:48.059
00:01:48.059 compress:
00:01:48.059
00:01:48.059 vdpa:
00:01:48.059
00:01:48.059
00:01:48.059 Message:
00:01:48.059 =================
00:01:48.059 Content Skipped
00:01:48.059 =================
00:01:48.059
00:01:48.059 apps:
00:01:48.059 dumpcap: explicitly disabled via build config
00:01:48.059 graph: explicitly disabled via build config
00:01:48.059 pdump: explicitly disabled via build config
00:01:48.059 proc-info: explicitly disabled via build config
00:01:48.059 test-acl: explicitly disabled via build config
00:01:48.059 test-bbdev: explicitly disabled via build config
00:01:48.059 test-cmdline: explicitly disabled via build config
00:01:48.059 test-compress-perf: explicitly disabled via build config
00:01:48.059 test-crypto-perf: explicitly disabled via build config
00:01:48.059 test-dma-perf: explicitly disabled via build config
00:01:48.059 test-eventdev: explicitly disabled via build config
00:01:48.059 test-fib: explicitly disabled via build config
00:01:48.059 test-flow-perf: explicitly disabled via build config
00:01:48.059 test-gpudev: explicitly disabled via build config
00:01:48.059 test-mldev: explicitly disabled via build config
00:01:48.059 test-pipeline: explicitly disabled via build config
00:01:48.059 test-pmd: explicitly disabled via build config
00:01:48.059 test-regex: explicitly disabled via build config
00:01:48.059 test-sad: explicitly disabled via build config
00:01:48.059 test-security-perf: explicitly disabled via build config
00:01:48.059
00:01:48.059 libs:
00:01:48.059 argparse: explicitly disabled via build config
00:01:48.059 metrics: explicitly disabled via build config
00:01:48.059 acl: explicitly disabled via build config
00:01:48.059 bbdev: explicitly disabled via build config
00:01:48.059 bitratestats: explicitly disabled via build config
00:01:48.059 bpf: explicitly disabled via build config
00:01:48.059 cfgfile: explicitly disabled via build config
00:01:48.059 distributor: explicitly disabled via build config
00:01:48.059 efd: explicitly disabled via build config
00:01:48.059 eventdev: explicitly disabled via build config
00:01:48.059 dispatcher: explicitly disabled via build config
00:01:48.059 gpudev: explicitly disabled via build config
00:01:48.059 gro: explicitly disabled via build config
00:01:48.059 gso: explicitly disabled via build config
00:01:48.059 ip_frag: explicitly disabled via build config
00:01:48.059 jobstats: explicitly disabled via build config
00:01:48.059 latencystats: explicitly disabled via build config
00:01:48.059 lpm: explicitly disabled via build config
00:01:48.059 member: explicitly disabled via build config
00:01:48.059 pcapng: explicitly disabled via build config
00:01:48.059 rawdev: explicitly disabled via build config
00:01:48.059 regexdev: explicitly disabled via build config
00:01:48.059 mldev: explicitly disabled via build config
00:01:48.059 rib: explicitly disabled via build config
00:01:48.059 sched: explicitly disabled via build config
00:01:48.059 stack: explicitly disabled via build config
00:01:48.059 ipsec: explicitly disabled via build config
00:01:48.059 pdcp: explicitly disabled via build config
00:01:48.059 fib: explicitly disabled via build config
00:01:48.059 port: explicitly disabled via build config
00:01:48.059 pdump: explicitly disabled via build config
00:01:48.059 table: explicitly disabled via build config
00:01:48.059 pipeline: explicitly disabled via build config
00:01:48.059 graph: explicitly disabled via build config
00:01:48.059 node: explicitly disabled via build config
00:01:48.059
00:01:48.059 drivers:
00:01:48.059 common/cpt: not in enabled drivers build config
00:01:48.059 common/dpaax: not in enabled drivers build config
00:01:48.059 common/iavf: not in enabled drivers build config
00:01:48.059 common/idpf: not in enabled drivers build config
00:01:48.059 common/ionic: not in enabled drivers build config
00:01:48.059 common/mvep: not in enabled drivers build config
00:01:48.059 common/octeontx: not in enabled drivers build config
00:01:48.059 bus/auxiliary: not in enabled drivers build config
00:01:48.059 bus/cdx: not in enabled drivers build config
00:01:48.059 bus/dpaa: not in enabled drivers build config
00:01:48.059 bus/fslmc: not in enabled drivers build config
00:01:48.059 bus/ifpga: not in enabled drivers build config
00:01:48.059 bus/platform: not in enabled drivers build config
00:01:48.059 bus/uacce: not in enabled drivers build config
00:01:48.059 bus/vmbus: not in enabled drivers build config
00:01:48.059 common/cnxk: not in enabled drivers build config
00:01:48.059 common/mlx5: not in enabled drivers build config
00:01:48.059 common/nfp: not in enabled drivers build config
00:01:48.059 common/nitrox: not in enabled drivers build config
00:01:48.059 common/qat: not in enabled drivers build config
00:01:48.059 common/sfc_efx: not in enabled drivers build config
00:01:48.059 mempool/bucket: not in enabled drivers build config
00:01:48.059 mempool/cnxk: not in enabled drivers build config
00:01:48.059 mempool/dpaa: not in enabled drivers build config
00:01:48.059 mempool/dpaa2: not in enabled drivers build config
00:01:48.059 mempool/octeontx: not in enabled drivers build config
00:01:48.059 mempool/stack: not in enabled drivers build config
00:01:48.059 dma/cnxk: not in enabled drivers build config
00:01:48.059 dma/dpaa: not in enabled drivers build config
00:01:48.059 dma/dpaa2: not in enabled drivers build config
00:01:48.059 dma/hisilicon: not in enabled drivers build config
00:01:48.059 dma/idxd: not in enabled drivers build config
00:01:48.059 dma/ioat: not in enabled drivers build config
00:01:48.059 dma/skeleton: not in enabled drivers build config
00:01:48.059 net/af_packet: not in enabled drivers build config
00:01:48.059 net/af_xdp: not in enabled drivers build config
00:01:48.059 net/ark: not in enabled drivers build config
00:01:48.059 net/atlantic: not in enabled drivers build config
00:01:48.059 net/avp: not in enabled drivers build config
00:01:48.059 net/axgbe: not in enabled drivers build config
00:01:48.059 net/bnx2x: not in enabled drivers build config
00:01:48.059 net/bnxt: not in enabled drivers build config
00:01:48.059 net/bonding: not in enabled drivers build config
00:01:48.059 net/cnxk: not in enabled drivers build config
00:01:48.059 net/cpfl: not in enabled drivers build config
00:01:48.059 net/cxgbe: not in enabled drivers build config
00:01:48.059 net/dpaa: not in enabled drivers build config
00:01:48.059 net/dpaa2: not in enabled drivers build config
00:01:48.059 net/e1000: not in enabled drivers build config
00:01:48.059 net/ena: not in enabled drivers build config
00:01:48.059 net/enetc: not in enabled drivers build config
00:01:48.059 net/enetfec: not in enabled drivers build config
00:01:48.059 net/enic: not in enabled drivers build config
00:01:48.059 net/failsafe: not in enabled drivers build config
00:01:48.059 net/fm10k: not in enabled drivers build config
00:01:48.059 net/gve: not in enabled drivers build config
00:01:48.059 net/hinic: not in enabled drivers build config
00:01:48.059 net/hns3: not in enabled drivers build config
00:01:48.059 net/i40e: not in enabled drivers build config
00:01:48.059 net/iavf: not in enabled drivers build config
00:01:48.059 net/ice: not in enabled drivers build config
00:01:48.059 net/idpf: not in enabled drivers build config
00:01:48.059 net/igc: not in enabled drivers build config
00:01:48.059 net/ionic: not in enabled drivers build config
00:01:48.059 net/ipn3ke: not in enabled drivers build config
00:01:48.059 net/ixgbe: not in enabled drivers build config
00:01:48.059 net/mana: not in enabled drivers build config
00:01:48.059 net/memif: not in enabled drivers build config
00:01:48.059 net/mlx4: not in enabled drivers build config
00:01:48.059 net/mlx5: not in enabled drivers build config
00:01:48.059 net/mvneta: not in enabled drivers build config
00:01:48.059 net/mvpp2: not in enabled drivers build config
00:01:48.059 net/netvsc: not in enabled drivers build config
00:01:48.059 net/nfb: not in enabled drivers build config
00:01:48.059 net/nfp: not in enabled drivers build config
00:01:48.059 net/ngbe: not in enabled drivers build config
00:01:48.059 net/null: not in enabled drivers build config
00:01:48.059 net/octeontx: not in enabled drivers build config
00:01:48.059 net/octeon_ep: not in enabled drivers build config
00:01:48.059 net/pcap: not in enabled drivers build config
00:01:48.059 net/pfe: not in enabled drivers build config
00:01:48.059 net/qede: not in enabled drivers build config
00:01:48.059 net/ring: not in enabled drivers build config
00:01:48.059 net/sfc: not in enabled drivers build config
00:01:48.059 net/softnic: not in enabled drivers build config
00:01:48.059 net/tap: not in enabled drivers build config
00:01:48.059 net/thunderx: not in enabled drivers build config
00:01:48.059 net/txgbe: not in enabled drivers build config
00:01:48.059 net/vdev_netvsc: not in enabled drivers build config
00:01:48.059 net/vhost: not in enabled drivers build config
00:01:48.060 net/virtio: not in enabled drivers build config
00:01:48.060 net/vmxnet3: not in enabled drivers build config
00:01:48.060 raw/*: missing internal dependency, "rawdev"
00:01:48.060 crypto/armv8: not in enabled drivers build config
00:01:48.060 crypto/bcmfs: not in enabled drivers build config
00:01:48.060 crypto/caam_jr: not in enabled drivers build config
00:01:48.060 crypto/ccp: not in enabled drivers build config
00:01:48.060 crypto/cnxk: not in enabled drivers build config
00:01:48.060 crypto/dpaa_sec: not in enabled drivers build config
00:01:48.060 crypto/dpaa2_sec: not in enabled drivers build config
00:01:48.060 crypto/ipsec_mb: not in enabled drivers build config
00:01:48.060 crypto/mlx5: not in enabled drivers build config
00:01:48.060 crypto/mvsam: not in enabled drivers build config
00:01:48.060 crypto/nitrox: not in enabled drivers build config
00:01:48.060 crypto/null: not in enabled drivers build config
00:01:48.060 crypto/octeontx: not in enabled drivers build config
00:01:48.060 crypto/openssl: not in enabled drivers build config
00:01:48.060 crypto/scheduler: not in enabled drivers build config
00:01:48.060 crypto/uadk: not in enabled drivers build config
00:01:48.060 crypto/virtio: not in enabled drivers build config
00:01:48.060 compress/isal: not in enabled drivers build config
00:01:48.060 compress/mlx5: not in enabled drivers build config
00:01:48.060 compress/nitrox: not in enabled drivers build config
00:01:48.060 compress/octeontx: not in enabled drivers build config
00:01:48.060 compress/zlib: not in enabled drivers build config
00:01:48.060 regex/*: missing internal dependency, "regexdev"
00:01:48.060 ml/*: missing internal dependency, "mldev"
00:01:48.060 vdpa/ifc: not in enabled drivers build config
00:01:48.060 vdpa/mlx5: not in enabled drivers build config
00:01:48.060 vdpa/nfp: not in enabled drivers build config
00:01:48.060 vdpa/sfc: not in enabled drivers build config
00:01:48.060 event/*: missing internal dependency, "eventdev"
00:01:48.060 baseband/*: missing internal dependency, "bbdev"
00:01:48.060 gpu/*: missing internal dependency, "gpudev"
00:01:48.060
00:01:48.060
00:01:48.060 Build targets in project: 84
00:01:48.060
00:01:48.060 DPDK 24.03.0
00:01:48.060
00:01:48.060 User defined options
00:01:48.060 buildtype : debug
00:01:48.060 default_library : shared
00:01:48.060 libdir : lib
00:01:48.060 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:48.060 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:48.060 c_link_args :
00:01:48.060 cpu_instruction_set: native
00:01:48.060 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:48.060 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:48.060 enable_docs : false
00:01:48.060 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:48.060 enable_kmods : false
00:01:48.060 max_lcores : 128
00:01:48.060 tests : false
00:01:48.060
00:01:48.060 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:48.060 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:48.060 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:48.060 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:48.060 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:48.060 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:48.060 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:48.060 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:48.060 [7/267] Linking static target lib/librte_kvargs.a
00:01:48.060 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:48.060 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:48.060 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:48.060 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:48.060 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:48.060 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:48.060 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:48.060 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:48.060 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:48.060 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:48.060 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:48.060 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:48.060 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:48.060 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:48.060 [22/267] Linking static target lib/librte_log.a
00:01:48.319 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:48.319 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:48.319 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:48.319 [26/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:48.319 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:48.319 [28/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:48.319 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:48.319 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:48.319 [31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:48.319 [32/267] Linking static target lib/librte_pci.a
00:01:48.319 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:48.319 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:48.319 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:48.319 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:48.319 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:48.319 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:48.580 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:48.580 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.580 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:48.580 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:48.580 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:48.580 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:48.580 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.580 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:48.580 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:48.580 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:48.580 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:48.580 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:48.580 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:48.580 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:48.580 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:48.580 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:48.580 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:48.580 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:48.580 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:48.580 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:48.580 [59/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:48.580 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:48.580 [61/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:48.580 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:48.580 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:48.580 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:48.580 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.580 [66/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:48.580 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:48.580 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:48.580 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:48.580 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:48.580 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:48.580 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:48.580 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:48.580 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:48.580 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:48.580 [76/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:48.580 [77/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:48.580 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:48.580 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:48.580 [80/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:48.580 [81/267] Linking static target lib/librte_meter.a
00:01:48.580 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:48.580 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:48.580 [84/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:48.580 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:48.580 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:48.580 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:48.580 [88/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:48.580 [89/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:48.580 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:48.580 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:48.580 [92/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:48.580 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:48.580 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:48.580 [95/267] Linking static target lib/librte_ring.a
00:01:48.580 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:48.580 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:48.580 [98/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:48.580 [99/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:48.580 [100/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:48.580 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:48.580 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.580 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:48.580 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:48.580 [105/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:48.580 [106/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:48.580 [107/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:48.580 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:48.580 [109/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:48.580 [110/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:48.580 [111/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:48.580 [112/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:48.580 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:48.580 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:48.580 [115/267] Linking static target lib/librte_timer.a
00:01:48.580 [116/267] Linking static target lib/librte_telemetry.a
00:01:48.580 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:48.580 [118/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:48.580 [119/267] Linking static target lib/librte_cmdline.a
00:01:48.580 [120/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:48.580 [121/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:48.580 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:48.580 [123/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:48.580 [124/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:48.580 [125/267] Linking static target lib/librte_dmadev.a
00:01:48.580 [126/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:48.580 [127/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:48.580 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:48.580 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:48.580 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:48.580 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:48.580 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:48.843 [133/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:48.843 [134/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:48.843 [135/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:48.843 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:48.843 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:48.843 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:48.843 [139/267] Linking static target lib/librte_net.a
00:01:48.843 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:48.843 [141/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:48.843 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:48.843 [143/267] Linking static target lib/librte_reorder.a
00:01:48.843 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:48.843 [145/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:48.843 [146/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:48.843 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:48.843 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:48.843 [149/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.843 [150/267] Linking static target lib/librte_mempool.a
00:01:48.843 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
[152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.843 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.843 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.843 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.843 [156/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.843 [157/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.843 [158/267] Linking static target lib/librte_rcu.a 00:01:48.843 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.843 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.843 [161/267] Linking static target lib/librte_power.a 00:01:48.843 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.843 [163/267] Linking target lib/librte_log.so.24.1 00:01:48.843 [164/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.843 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.843 [166/267] Linking static target lib/librte_compressdev.a 00:01:48.843 [167/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.843 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.843 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.843 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.843 [171/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.843 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.843 [173/267] Linking static target lib/librte_eal.a 00:01:48.843 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.843 [175/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.843 [176/267] Linking static target drivers/librte_bus_vdev.a 00:01:48.843 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.843 [178/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.843 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.843 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:48.843 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.843 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.843 [183/267] Linking static target lib/librte_mbuf.a 00:01:48.843 [184/267] Linking static target lib/librte_security.a 00:01:48.843 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.843 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.843 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.104 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.104 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.104 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.104 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.104 [192/267] Linking static target lib/librte_hash.a 00:01:49.104 [193/267] Linking target 
lib/librte_kvargs.so.24.1 00:01:49.104 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.104 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.104 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.104 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.104 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.104 [199/267] Linking static target drivers/librte_mempool_ring.a 00:01:49.104 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.104 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.104 [202/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.104 [203/267] Linking static target drivers/librte_bus_pci.a 00:01:49.104 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.104 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.104 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.104 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.104 [208/267] Linking static target lib/librte_cryptodev.a 00:01:49.104 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.364 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.364 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.364 [212/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.364 [213/267] Linking target lib/librte_telemetry.so.24.1 00:01:49.364 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.625 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.625 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.625 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.625 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.625 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.625 [220/267] Linking static target lib/librte_ethdev.a 00:01:49.885 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.885 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.885 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.885 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.145 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.145 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.713 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.973 [228/267] Linking static target lib/librte_vhost.a 00:01:51.232 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:53.138 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.717 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [233/267] Linking target lib/librte_eal.so.24.1 00:02:00.289 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:00.550 [235/267] Linking target lib/librte_meter.so.24.1 00:02:00.550 [236/267] Linking target lib/librte_ring.so.24.1 00:02:00.550 [237/267] Linking target lib/librte_timer.so.24.1 00:02:00.550 [238/267] Linking target lib/librte_pci.so.24.1 00:02:00.550 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:00.550 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:00.550 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:00.550 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:00.550 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:00.550 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:00.550 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:00.550 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:00.550 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:00.550 [248/267] Linking target lib/librte_rcu.so.24.1 00:02:00.812 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:00.812 [250/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:00.812 [251/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.812 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:00.812 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:00.812 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:01.072 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:01.072 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:01.072 [257/267] Linking target lib/librte_net.so.24.1 00:02:01.072 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:01.073 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:01.073 [260/267] Linking target lib/librte_hash.so.24.1 00:02:01.073 [261/267] Linking target lib/librte_security.so.24.1 00:02:01.073 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:01.073 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:01.334 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:01.334 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.334 [266/267] Linking target lib/librte_power.so.24.1 00:02:01.334 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:01.334 INFO: autodetecting backend as ninja 00:02:01.334 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:05.538 CC lib/log/log.o 00:02:05.538 CC lib/ut/ut.o 00:02:05.538 CC lib/log/log_flags.o 00:02:05.538 CC lib/log/log_deprecated.o 00:02:05.538 CC lib/ut_mock/mock.o 00:02:05.538 LIB libspdk_log.a 00:02:05.538 LIB 
libspdk_ut.a 00:02:05.538 LIB libspdk_ut_mock.a 00:02:05.538 SO libspdk_ut.so.2.0 00:02:05.538 SO libspdk_log.so.7.1 00:02:05.538 SO libspdk_ut_mock.so.6.0 00:02:05.538 SYMLINK libspdk_ut.so 00:02:05.538 SYMLINK libspdk_ut_mock.so 00:02:05.538 SYMLINK libspdk_log.so 00:02:05.798 CC lib/dma/dma.o 00:02:05.798 CC lib/util/base64.o 00:02:05.798 CC lib/util/bit_array.o 00:02:05.798 CC lib/util/cpuset.o 00:02:05.798 CC lib/util/crc16.o 00:02:05.798 CC lib/util/crc32.o 00:02:05.798 CC lib/util/crc32c.o 00:02:05.798 CC lib/util/crc32_ieee.o 00:02:05.798 CXX lib/trace_parser/trace.o 00:02:05.798 CC lib/util/crc64.o 00:02:05.798 CC lib/ioat/ioat.o 00:02:05.798 CC lib/util/dif.o 00:02:05.798 CC lib/util/fd.o 00:02:05.798 CC lib/util/fd_group.o 00:02:05.798 CC lib/util/file.o 00:02:05.798 CC lib/util/hexlify.o 00:02:05.798 CC lib/util/iov.o 00:02:05.798 CC lib/util/math.o 00:02:05.798 CC lib/util/net.o 00:02:05.798 CC lib/util/pipe.o 00:02:05.798 CC lib/util/strerror_tls.o 00:02:05.798 CC lib/util/string.o 00:02:05.798 CC lib/util/uuid.o 00:02:05.798 CC lib/util/xor.o 00:02:05.798 CC lib/util/md5.o 00:02:05.798 CC lib/util/zipf.o 00:02:05.798 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.798 CC lib/vfio_user/host/vfio_user.o 00:02:06.058 LIB libspdk_dma.a 00:02:06.058 SO libspdk_dma.so.5.0 00:02:06.058 LIB libspdk_ioat.a 00:02:06.058 SYMLINK libspdk_dma.so 00:02:06.058 SO libspdk_ioat.so.7.0 00:02:06.058 LIB libspdk_util.a 00:02:06.058 LIB libspdk_vfio_user.a 00:02:06.058 SYMLINK libspdk_ioat.so 00:02:06.058 SO libspdk_vfio_user.so.5.0 00:02:06.319 SO libspdk_util.so.10.1 00:02:06.319 SYMLINK libspdk_vfio_user.so 00:02:06.319 SYMLINK libspdk_util.so 00:02:06.580 LIB libspdk_trace_parser.a 00:02:06.580 SO libspdk_trace_parser.so.6.0 00:02:06.580 CC lib/rdma_utils/rdma_utils.o 00:02:06.580 CC lib/vmd/vmd.o 00:02:06.580 CC lib/vmd/led.o 00:02:06.580 CC lib/idxd/idxd.o 00:02:06.580 CC lib/idxd/idxd_user.o 00:02:06.580 SYMLINK libspdk_trace_parser.so 00:02:06.580 CC lib/idxd/idxd_kernel.o 00:02:06.580 CC lib/conf/conf.o 00:02:06.580 CC lib/json/json_parse.o 00:02:06.580 CC lib/json/json_util.o 00:02:06.580 CC lib/env_dpdk/env.o 00:02:06.580 CC lib/json/json_write.o 00:02:06.580 CC lib/env_dpdk/memory.o 00:02:06.580 CC lib/env_dpdk/pci.o 00:02:06.580 CC lib/env_dpdk/init.o 00:02:06.580 CC lib/env_dpdk/threads.o 00:02:06.580 CC lib/env_dpdk/pci_ioat.o 00:02:06.580 CC lib/env_dpdk/pci_virtio.o 00:02:06.580 CC lib/env_dpdk/pci_vmd.o 00:02:06.842 CC lib/env_dpdk/pci_idxd.o 00:02:06.842 CC lib/env_dpdk/pci_event.o 00:02:06.842 CC lib/env_dpdk/sigbus_handler.o 00:02:06.842 CC lib/env_dpdk/pci_dpdk.o 00:02:06.842 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.842 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.842 LIB libspdk_conf.a 00:02:06.842 LIB libspdk_rdma_utils.a 00:02:06.842 SO libspdk_conf.so.6.0 00:02:07.103 SO libspdk_rdma_utils.so.1.0 00:02:07.103 LIB libspdk_json.a 00:02:07.103 SYMLINK libspdk_conf.so 00:02:07.103 SO libspdk_json.so.6.0 00:02:07.103 SYMLINK libspdk_rdma_utils.so 00:02:07.103 SYMLINK libspdk_json.so 00:02:07.103 LIB libspdk_idxd.a 00:02:07.364 SO libspdk_idxd.so.12.1 00:02:07.364 LIB libspdk_vmd.a 00:02:07.364 SO libspdk_vmd.so.6.0 00:02:07.364 SYMLINK libspdk_idxd.so 00:02:07.364 SYMLINK libspdk_vmd.so 00:02:07.364 CC lib/rdma_provider/common.o 00:02:07.364 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:07.364 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.364 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.364 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.364 CC lib/jsonrpc/jsonrpc_client_tcp.o 
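The LIB / SO / SYMLINK triplets in the output above are the three per-library packaging steps: archive the objects into a static library, link the same objects into a versioned shared object, and drop an unversioned development symlink next to it. A minimal sketch of what one such triplet conventionally expands to, using libspdk_log as the example (illustrative ar/gcc/ln commands and soname, not SPDK's actual Makefile rules):

    # static archive (the "LIB libspdk_log.a" step)
    ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o
    # versioned shared object (the "SO libspdk_log.so.7.1" step)
    gcc -shared -Wl,-soname,libspdk_log.so.7 -o libspdk_log.so.7.1 \
        log.o log_flags.o log_deprecated.o
    # unversioned dev symlink (the "SYMLINK libspdk_log.so" step)
    ln -sf libspdk_log.so.7.1 libspdk_log.so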
00:02:07.624 LIB libspdk_rdma_provider.a 00:02:07.624 SO libspdk_rdma_provider.so.7.0 00:02:07.624 LIB libspdk_jsonrpc.a 00:02:07.885 SYMLINK libspdk_rdma_provider.so 00:02:07.885 SO libspdk_jsonrpc.so.6.0 00:02:07.885 SYMLINK libspdk_jsonrpc.so 00:02:07.885 LIB libspdk_env_dpdk.a 00:02:08.145 SO libspdk_env_dpdk.so.15.1 00:02:08.145 SYMLINK libspdk_env_dpdk.so 00:02:08.145 CC lib/rpc/rpc.o 00:02:08.405 LIB libspdk_rpc.a 00:02:08.405 SO libspdk_rpc.so.6.0 00:02:08.666 SYMLINK libspdk_rpc.so 00:02:08.927 CC lib/notify/notify.o 00:02:08.927 CC lib/notify/notify_rpc.o 00:02:08.927 CC lib/trace/trace.o 00:02:08.927 CC lib/trace/trace_flags.o 00:02:08.927 CC lib/trace/trace_rpc.o 00:02:08.927 CC lib/keyring/keyring.o 00:02:08.927 CC lib/keyring/keyring_rpc.o 00:02:09.188 LIB libspdk_notify.a 00:02:09.188 SO libspdk_notify.so.6.0 00:02:09.188 LIB libspdk_keyring.a 00:02:09.188 LIB libspdk_trace.a 00:02:09.188 SO libspdk_keyring.so.2.0 00:02:09.188 SYMLINK libspdk_notify.so 00:02:09.188 SO libspdk_trace.so.11.0 00:02:09.188 SYMLINK libspdk_keyring.so 00:02:09.188 SYMLINK libspdk_trace.so 00:02:09.760 CC lib/sock/sock.o 00:02:09.760 CC lib/sock/sock_rpc.o 00:02:09.760 CC lib/thread/thread.o 00:02:09.760 CC lib/thread/iobuf.o 00:02:10.021 LIB libspdk_sock.a 00:02:10.021 SO libspdk_sock.so.10.0 00:02:10.021 SYMLINK libspdk_sock.so 00:02:10.592 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.592 CC lib/nvme/nvme_ctrlr.o 00:02:10.592 CC lib/nvme/nvme_fabric.o 00:02:10.592 CC lib/nvme/nvme_ns_cmd.o 00:02:10.592 CC lib/nvme/nvme_ns.o 00:02:10.592 CC lib/nvme/nvme_pcie_common.o 00:02:10.592 CC lib/nvme/nvme_pcie.o 00:02:10.592 CC lib/nvme/nvme_qpair.o 00:02:10.592 CC lib/nvme/nvme_transport.o 00:02:10.592 CC lib/nvme/nvme.o 00:02:10.592 CC lib/nvme/nvme_quirks.o 00:02:10.592 CC lib/nvme/nvme_discovery.o 00:02:10.592 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.592 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.592 CC lib/nvme/nvme_tcp.o 00:02:10.592 CC lib/nvme/nvme_opal.o 00:02:10.592 CC lib/nvme/nvme_io_msg.o 00:02:10.592 CC lib/nvme/nvme_poll_group.o 00:02:10.592 CC lib/nvme/nvme_zns.o 00:02:10.592 CC lib/nvme/nvme_stubs.o 00:02:10.592 CC lib/nvme/nvme_auth.o 00:02:10.592 CC lib/nvme/nvme_cuse.o 00:02:10.592 CC lib/nvme/nvme_vfio_user.o 00:02:10.592 CC lib/nvme/nvme_rdma.o 00:02:10.852 LIB libspdk_thread.a 00:02:10.852 SO libspdk_thread.so.11.0 00:02:11.113 SYMLINK libspdk_thread.so 00:02:11.375 CC lib/virtio/virtio.o 00:02:11.375 CC lib/virtio/virtio_vhost_user.o 00:02:11.375 CC lib/virtio/virtio_vfio_user.o 00:02:11.375 CC lib/virtio/virtio_pci.o 00:02:11.375 CC lib/fsdev/fsdev.o 00:02:11.375 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.375 CC lib/fsdev/fsdev_io.o 00:02:11.375 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.375 CC lib/accel/accel.o 00:02:11.375 CC lib/fsdev/fsdev_rpc.o 00:02:11.375 CC lib/accel/accel_rpc.o 00:02:11.375 CC lib/accel/accel_sw.o 00:02:11.375 CC lib/blob/blobstore.o 00:02:11.375 CC lib/blob/request.o 00:02:11.375 CC lib/blob/zeroes.o 00:02:11.375 CC lib/blob/blob_bs_dev.o 00:02:11.375 CC lib/init/json_config.o 00:02:11.375 CC lib/init/subsystem.o 00:02:11.375 CC lib/init/subsystem_rpc.o 00:02:11.375 CC lib/init/rpc.o 00:02:11.635 LIB libspdk_init.a 00:02:11.635 SO libspdk_init.so.6.0 00:02:11.635 LIB libspdk_virtio.a 00:02:11.895 LIB libspdk_vfu_tgt.a 00:02:11.895 SO libspdk_virtio.so.7.0 00:02:11.895 SO libspdk_vfu_tgt.so.3.0 00:02:11.895 SYMLINK libspdk_init.so 00:02:11.895 SYMLINK libspdk_virtio.so 00:02:11.895 SYMLINK libspdk_vfu_tgt.so 00:02:11.896 LIB libspdk_fsdev.a 00:02:12.157 SO 
libspdk_fsdev.so.2.0 00:02:12.157 SYMLINK libspdk_fsdev.so 00:02:12.157 CC lib/event/app.o 00:02:12.157 CC lib/event/reactor.o 00:02:12.157 CC lib/event/log_rpc.o 00:02:12.157 CC lib/event/app_rpc.o 00:02:12.157 CC lib/event/scheduler_static.o 00:02:12.432 LIB libspdk_accel.a 00:02:12.432 LIB libspdk_nvme.a 00:02:12.432 SO libspdk_accel.so.16.0 00:02:12.432 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:12.432 SYMLINK libspdk_accel.so 00:02:12.432 SO libspdk_nvme.so.15.0 00:02:12.752 LIB libspdk_event.a 00:02:12.752 SO libspdk_event.so.14.0 00:02:12.752 SYMLINK libspdk_event.so 00:02:12.752 SYMLINK libspdk_nvme.so 00:02:12.752 CC lib/bdev/bdev.o 00:02:12.752 CC lib/bdev/bdev_rpc.o 00:02:12.752 CC lib/bdev/part.o 00:02:12.752 CC lib/bdev/bdev_zone.o 00:02:13.050 CC lib/bdev/scsi_nvme.o 00:02:13.050 LIB libspdk_fuse_dispatcher.a 00:02:13.050 SO libspdk_fuse_dispatcher.so.1.0 00:02:13.344 SYMLINK libspdk_fuse_dispatcher.so 00:02:14.287 LIB libspdk_blob.a 00:02:14.287 SO libspdk_blob.so.11.0 00:02:14.287 SYMLINK libspdk_blob.so 00:02:14.548 CC lib/lvol/lvol.o 00:02:14.548 CC lib/blobfs/blobfs.o 00:02:14.548 CC lib/blobfs/tree.o 00:02:15.119 LIB libspdk_bdev.a 00:02:15.119 SO libspdk_bdev.so.17.0 00:02:15.119 SYMLINK libspdk_bdev.so 00:02:15.380 LIB libspdk_blobfs.a 00:02:15.380 LIB libspdk_lvol.a 00:02:15.380 SO libspdk_blobfs.so.10.0 00:02:15.380 SO libspdk_lvol.so.10.0 00:02:15.380 SYMLINK libspdk_blobfs.so 00:02:15.380 SYMLINK libspdk_lvol.so 00:02:15.640 CC lib/scsi/dev.o 00:02:15.640 CC lib/scsi/port.o 00:02:15.640 CC lib/scsi/lun.o 00:02:15.640 CC lib/scsi/scsi.o 00:02:15.640 CC lib/scsi/scsi_bdev.o 00:02:15.640 CC lib/scsi/scsi_pr.o 00:02:15.640 CC lib/scsi/scsi_rpc.o 00:02:15.640 CC lib/scsi/task.o 00:02:15.640 CC lib/nvmf/ctrlr.o 00:02:15.640 CC lib/nvmf/ctrlr_discovery.o 00:02:15.640 CC lib/nvmf/ctrlr_bdev.o 00:02:15.640 CC lib/nvmf/subsystem.o 00:02:15.640 CC lib/nvmf/nvmf.o 00:02:15.640 CC lib/nvmf/nvmf_rpc.o 00:02:15.640 CC lib/nvmf/transport.o 00:02:15.640 CC lib/nvmf/tcp.o 00:02:15.640 CC lib/nvmf/stubs.o 00:02:15.640 CC lib/nvmf/vfio_user.o 00:02:15.640 CC lib/nvmf/mdns_server.o 00:02:15.640 CC lib/ublk/ublk.o 00:02:15.640 CC lib/ublk/ublk_rpc.o 00:02:15.640 CC lib/nbd/nbd.o 00:02:15.640 CC lib/nvmf/rdma.o 00:02:15.640 CC lib/nbd/nbd_rpc.o 00:02:15.640 CC lib/nvmf/auth.o 00:02:15.640 CC lib/ftl/ftl_core.o 00:02:15.640 CC lib/ftl/ftl_init.o 00:02:15.640 CC lib/ftl/ftl_layout.o 00:02:15.640 CC lib/ftl/ftl_debug.o 00:02:15.640 CC lib/ftl/ftl_io.o 00:02:15.640 CC lib/ftl/ftl_sb.o 00:02:15.640 CC lib/ftl/ftl_l2p.o 00:02:15.640 CC lib/ftl/ftl_l2p_flat.o 00:02:15.640 CC lib/ftl/ftl_nv_cache.o 00:02:15.640 CC lib/ftl/ftl_band.o 00:02:15.640 CC lib/ftl/ftl_band_ops.o 00:02:15.640 CC lib/ftl/ftl_writer.o 00:02:15.640 CC lib/ftl/ftl_rq.o 00:02:15.640 CC lib/ftl/ftl_p2l.o 00:02:15.640 CC lib/ftl/ftl_reloc.o 00:02:15.640 CC lib/ftl/ftl_l2p_cache.o 00:02:15.640 CC lib/ftl/ftl_p2l_log.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.640 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.640 CC 
lib/ftl/utils/ftl_conf.o 00:02:15.640 CC lib/ftl/utils/ftl_mempool.o 00:02:15.640 CC lib/ftl/utils/ftl_md.o 00:02:15.640 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.640 CC lib/ftl/utils/ftl_property.o 00:02:15.640 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.640 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.640 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.640 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.640 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.640 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.640 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:15.640 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:15.640 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.640 CC lib/ftl/base/ftl_base_dev.o 00:02:15.640 CC lib/ftl/ftl_trace.o 00:02:16.210 LIB libspdk_nbd.a 00:02:16.210 SO libspdk_nbd.so.7.0 00:02:16.210 LIB libspdk_scsi.a 00:02:16.210 SO libspdk_scsi.so.9.0 00:02:16.210 SYMLINK libspdk_nbd.so 00:02:16.210 SYMLINK libspdk_scsi.so 00:02:16.210 LIB libspdk_ublk.a 00:02:16.471 SO libspdk_ublk.so.3.0 00:02:16.471 SYMLINK libspdk_ublk.so 00:02:16.731 CC lib/iscsi/conn.o 00:02:16.731 CC lib/iscsi/init_grp.o 00:02:16.731 CC lib/iscsi/iscsi.o 00:02:16.731 CC lib/iscsi/param.o 00:02:16.731 CC lib/iscsi/portal_grp.o 00:02:16.731 LIB libspdk_ftl.a 00:02:16.731 CC lib/iscsi/tgt_node.o 00:02:16.731 CC lib/iscsi/task.o 00:02:16.731 CC lib/iscsi/iscsi_subsystem.o 00:02:16.731 CC lib/iscsi/iscsi_rpc.o 00:02:16.731 CC lib/vhost/vhost.o 00:02:16.731 CC lib/vhost/vhost_rpc.o 00:02:16.731 CC lib/vhost/vhost_blk.o 00:02:16.731 CC lib/vhost/vhost_scsi.o 00:02:16.731 CC lib/vhost/rte_vhost_user.o 00:02:16.731 SO libspdk_ftl.so.9.0 00:02:16.991 SYMLINK libspdk_ftl.so 00:02:17.561 LIB libspdk_nvmf.a 00:02:17.561 SO libspdk_nvmf.so.20.0 00:02:17.561 LIB libspdk_vhost.a 00:02:17.561 SO libspdk_vhost.so.8.0 00:02:17.821 SYMLINK libspdk_nvmf.so 00:02:17.821 SYMLINK libspdk_vhost.so 00:02:17.821 LIB libspdk_iscsi.a 00:02:17.821 SO libspdk_iscsi.so.8.0 00:02:18.082 SYMLINK libspdk_iscsi.so 00:02:18.653 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.653 CC module/vfu_device/vfu_virtio.o 00:02:18.653 CC module/vfu_device/vfu_virtio_blk.o 00:02:18.653 CC module/vfu_device/vfu_virtio_scsi.o 00:02:18.653 CC module/vfu_device/vfu_virtio_rpc.o 00:02:18.653 CC module/vfu_device/vfu_virtio_fs.o 00:02:18.653 CC module/accel/iaa/accel_iaa.o 00:02:18.654 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.654 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.654 CC module/accel/error/accel_error.o 00:02:18.654 CC module/accel/error/accel_error_rpc.o 00:02:18.654 CC module/keyring/linux/keyring.o 00:02:18.654 CC module/accel/ioat/accel_ioat.o 00:02:18.654 CC module/keyring/linux/keyring_rpc.o 00:02:18.654 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.654 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.654 CC module/sock/posix/posix.o 00:02:18.654 CC module/keyring/file/keyring.o 00:02:18.654 LIB libspdk_env_dpdk_rpc.a 00:02:18.654 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.654 CC module/accel/dsa/accel_dsa.o 00:02:18.654 CC module/keyring/file/keyring_rpc.o 00:02:18.654 CC module/fsdev/aio/fsdev_aio.o 00:02:18.654 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:18.654 CC module/fsdev/aio/linux_aio_mgr.o 00:02:18.654 CC module/blob/bdev/blob_bdev.o 00:02:18.654 CC module/scheduler/gscheduler/gscheduler.o 
00:02:18.914 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.914 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.914 LIB libspdk_keyring_linux.a 00:02:18.914 LIB libspdk_keyring_file.a 00:02:18.914 LIB libspdk_scheduler_gscheduler.a 00:02:18.914 LIB libspdk_accel_ioat.a 00:02:18.914 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.914 SO libspdk_keyring_linux.so.1.0 00:02:18.914 LIB libspdk_scheduler_dynamic.a 00:02:18.914 LIB libspdk_accel_iaa.a 00:02:18.914 SO libspdk_keyring_file.so.2.0 00:02:18.914 LIB libspdk_accel_error.a 00:02:18.914 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.914 SO libspdk_accel_ioat.so.6.0 00:02:18.914 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.914 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.914 SO libspdk_accel_iaa.so.3.0 00:02:18.914 SO libspdk_accel_error.so.2.0 00:02:19.175 SYMLINK libspdk_keyring_linux.so 00:02:19.175 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.175 SYMLINK libspdk_keyring_file.so 00:02:19.175 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.175 LIB libspdk_accel_dsa.a 00:02:19.175 SYMLINK libspdk_accel_ioat.so 00:02:19.175 LIB libspdk_blob_bdev.a 00:02:19.175 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.175 SYMLINK libspdk_accel_iaa.so 00:02:19.175 SYMLINK libspdk_accel_error.so 00:02:19.175 SO libspdk_accel_dsa.so.5.0 00:02:19.175 SO libspdk_blob_bdev.so.11.0 00:02:19.175 LIB libspdk_vfu_device.a 00:02:19.175 SYMLINK libspdk_blob_bdev.so 00:02:19.175 SYMLINK libspdk_accel_dsa.so 00:02:19.175 SO libspdk_vfu_device.so.3.0 00:02:19.435 SYMLINK libspdk_vfu_device.so 00:02:19.435 LIB libspdk_fsdev_aio.a 00:02:19.435 SO libspdk_fsdev_aio.so.1.0 00:02:19.435 LIB libspdk_sock_posix.a 00:02:19.435 SO libspdk_sock_posix.so.6.0 00:02:19.435 SYMLINK libspdk_fsdev_aio.so 00:02:19.695 SYMLINK libspdk_sock_posix.so 00:02:19.696 CC module/bdev/gpt/vbdev_gpt.o 00:02:19.696 CC module/bdev/gpt/gpt.o 00:02:19.696 CC module/bdev/delay/vbdev_delay.o 00:02:19.696 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.696 CC module/bdev/error/vbdev_error.o 00:02:19.696 CC module/bdev/error/vbdev_error_rpc.o 00:02:19.696 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.696 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.696 CC module/bdev/nvme/bdev_nvme.o 00:02:19.696 CC module/bdev/raid/bdev_raid.o 00:02:19.696 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:19.696 CC module/bdev/null/bdev_null.o 00:02:19.696 CC module/bdev/nvme/nvme_rpc.o 00:02:19.696 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.696 CC module/bdev/null/bdev_null_rpc.o 00:02:19.696 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.696 CC module/bdev/nvme/bdev_mdns_client.o 00:02:19.696 CC module/bdev/lvol/vbdev_lvol.o 00:02:19.696 CC module/bdev/ftl/bdev_ftl.o 00:02:19.696 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:19.696 CC module/bdev/nvme/vbdev_opal.o 00:02:19.696 CC module/bdev/raid/raid0.o 00:02:19.696 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.696 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:19.696 CC module/bdev/raid/raid1.o 00:02:19.696 CC module/bdev/raid/concat.o 00:02:19.696 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.696 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:19.696 CC module/bdev/passthru/vbdev_passthru.o 00:02:19.696 CC module/blobfs/bdev/blobfs_bdev.o 00:02:19.696 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.696 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:19.696 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.696 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:19.696 CC module/bdev/aio/bdev_aio.o 00:02:19.696 CC module/bdev/malloc/bdev_malloc.o 00:02:19.696 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:02:19.696 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.696 CC module/bdev/split/vbdev_split.o 00:02:19.696 CC module/bdev/split/vbdev_split_rpc.o 00:02:19.696 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.696 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.955 LIB libspdk_blobfs_bdev.a 00:02:20.215 SO libspdk_blobfs_bdev.so.6.0 00:02:20.215 LIB libspdk_bdev_null.a 00:02:20.215 LIB libspdk_bdev_gpt.a 00:02:20.215 LIB libspdk_bdev_split.a 00:02:20.215 SO libspdk_bdev_gpt.so.6.0 00:02:20.215 LIB libspdk_bdev_ftl.a 00:02:20.215 SO libspdk_bdev_null.so.6.0 00:02:20.215 SO libspdk_bdev_split.so.6.0 00:02:20.215 LIB libspdk_bdev_error.a 00:02:20.215 SYMLINK libspdk_blobfs_bdev.so 00:02:20.215 LIB libspdk_bdev_delay.a 00:02:20.215 LIB libspdk_bdev_passthru.a 00:02:20.215 LIB libspdk_bdev_zone_block.a 00:02:20.215 SO libspdk_bdev_ftl.so.6.0 00:02:20.215 SO libspdk_bdev_error.so.6.0 00:02:20.215 SO libspdk_bdev_zone_block.so.6.0 00:02:20.215 LIB libspdk_bdev_aio.a 00:02:20.215 SYMLINK libspdk_bdev_gpt.so 00:02:20.215 SYMLINK libspdk_bdev_split.so 00:02:20.215 SYMLINK libspdk_bdev_null.so 00:02:20.215 SO libspdk_bdev_passthru.so.6.0 00:02:20.215 SO libspdk_bdev_delay.so.6.0 00:02:20.215 LIB libspdk_bdev_malloc.a 00:02:20.215 LIB libspdk_bdev_iscsi.a 00:02:20.215 SYMLINK libspdk_bdev_ftl.so 00:02:20.215 SO libspdk_bdev_aio.so.6.0 00:02:20.215 SYMLINK libspdk_bdev_error.so 00:02:20.215 SYMLINK libspdk_bdev_zone_block.so 00:02:20.215 SO libspdk_bdev_malloc.so.6.0 00:02:20.215 SO libspdk_bdev_iscsi.so.6.0 00:02:20.215 SYMLINK libspdk_bdev_delay.so 00:02:20.215 SYMLINK libspdk_bdev_passthru.so 00:02:20.215 SYMLINK libspdk_bdev_aio.so 00:02:20.476 SYMLINK libspdk_bdev_iscsi.so 00:02:20.476 SYMLINK libspdk_bdev_malloc.so 00:02:20.476 LIB libspdk_bdev_virtio.a 00:02:20.476 LIB libspdk_bdev_lvol.a 00:02:20.476 SO libspdk_bdev_virtio.so.6.0 00:02:20.476 SO libspdk_bdev_lvol.so.6.0 00:02:20.476 SYMLINK libspdk_bdev_virtio.so 00:02:20.476 SYMLINK libspdk_bdev_lvol.so 00:02:20.737 LIB libspdk_bdev_raid.a 00:02:20.737 SO libspdk_bdev_raid.so.6.0 00:02:20.997 SYMLINK libspdk_bdev_raid.so 00:02:21.936 LIB libspdk_bdev_nvme.a 00:02:22.196 SO libspdk_bdev_nvme.so.7.1 00:02:22.196 SYMLINK libspdk_bdev_nvme.so 00:02:22.767 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.767 CC module/event/subsystems/vmd/vmd.o 00:02:23.028 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.028 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.028 CC module/event/subsystems/sock/sock.o 00:02:23.028 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.028 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.028 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.028 CC module/event/subsystems/keyring/keyring.o 00:02:23.028 CC module/event/subsystems/fsdev/fsdev.o 00:02:23.028 LIB libspdk_event_scheduler.a 00:02:23.028 LIB libspdk_event_vfu_tgt.a 00:02:23.028 LIB libspdk_event_fsdev.a 00:02:23.028 LIB libspdk_event_vhost_blk.a 00:02:23.028 SO libspdk_event_scheduler.so.4.0 00:02:23.028 LIB libspdk_event_vmd.a 00:02:23.028 LIB libspdk_event_sock.a 00:02:23.028 LIB libspdk_event_keyring.a 00:02:23.028 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.028 LIB libspdk_event_iobuf.a 00:02:23.028 SO libspdk_event_fsdev.so.1.0 00:02:23.028 SO libspdk_event_vhost_blk.so.3.0 00:02:23.028 SO libspdk_event_vmd.so.6.0 00:02:23.028 SO libspdk_event_sock.so.5.0 00:02:23.028 SO libspdk_event_keyring.so.1.0 00:02:23.028 SYMLINK libspdk_event_scheduler.so 00:02:23.028 SO libspdk_event_iobuf.so.3.0 00:02:23.028 
SYMLINK libspdk_event_vfu_tgt.so 00:02:23.028 SYMLINK libspdk_event_fsdev.so 00:02:23.288 SYMLINK libspdk_event_vhost_blk.so 00:02:23.288 SYMLINK libspdk_event_vmd.so 00:02:23.288 SYMLINK libspdk_event_sock.so 00:02:23.288 SYMLINK libspdk_event_keyring.so 00:02:23.288 SYMLINK libspdk_event_iobuf.so 00:02:23.549 CC module/event/subsystems/accel/accel.o 00:02:23.549 LIB libspdk_event_accel.a 00:02:23.810 SO libspdk_event_accel.so.6.0 00:02:23.810 SYMLINK libspdk_event_accel.so 00:02:24.071 CC module/event/subsystems/bdev/bdev.o 00:02:24.331 LIB libspdk_event_bdev.a 00:02:24.331 SO libspdk_event_bdev.so.6.0 00:02:24.331 SYMLINK libspdk_event_bdev.so 00:02:24.902 CC module/event/subsystems/scsi/scsi.o 00:02:24.903 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.903 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.903 CC module/event/subsystems/ublk/ublk.o 00:02:24.903 CC module/event/subsystems/nbd/nbd.o 00:02:24.903 LIB libspdk_event_ublk.a 00:02:24.903 LIB libspdk_event_scsi.a 00:02:24.903 LIB libspdk_event_nbd.a 00:02:24.903 SO libspdk_event_ublk.so.3.0 00:02:24.903 SO libspdk_event_scsi.so.6.0 00:02:24.903 SO libspdk_event_nbd.so.6.0 00:02:24.903 LIB libspdk_event_nvmf.a 00:02:24.903 SYMLINK libspdk_event_ublk.so 00:02:25.164 SYMLINK libspdk_event_scsi.so 00:02:25.164 SO libspdk_event_nvmf.so.6.0 00:02:25.164 SYMLINK libspdk_event_nbd.so 00:02:25.164 SYMLINK libspdk_event_nvmf.so 00:02:25.425 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.425 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.686 LIB libspdk_event_vhost_scsi.a 00:02:25.686 LIB libspdk_event_iscsi.a 00:02:25.686 SO libspdk_event_vhost_scsi.so.3.0 00:02:25.686 SO libspdk_event_iscsi.so.6.0 00:02:25.686 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.686 SYMLINK libspdk_event_iscsi.so 00:02:25.946 SO libspdk.so.6.0 00:02:25.946 SYMLINK libspdk.so 00:02:26.207 CC test/rpc_client/rpc_client_test.o 00:02:26.207 TEST_HEADER include/spdk/accel.h 00:02:26.207 TEST_HEADER include/spdk/assert.h 00:02:26.207 TEST_HEADER include/spdk/accel_module.h 00:02:26.207 CC app/trace_record/trace_record.o 00:02:26.207 TEST_HEADER include/spdk/barrier.h 00:02:26.207 TEST_HEADER include/spdk/base64.h 00:02:26.207 TEST_HEADER include/spdk/bdev.h 00:02:26.207 TEST_HEADER include/spdk/bdev_module.h 00:02:26.207 TEST_HEADER include/spdk/bdev_zone.h 00:02:26.207 TEST_HEADER include/spdk/bit_array.h 00:02:26.207 TEST_HEADER include/spdk/bit_pool.h 00:02:26.207 CC app/spdk_lspci/spdk_lspci.o 00:02:26.207 TEST_HEADER include/spdk/blob_bdev.h 00:02:26.207 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:26.207 TEST_HEADER include/spdk/blobfs.h 00:02:26.207 CXX app/trace/trace.o 00:02:26.207 TEST_HEADER include/spdk/blob.h 00:02:26.207 TEST_HEADER include/spdk/conf.h 00:02:26.207 TEST_HEADER include/spdk/config.h 00:02:26.207 CC app/spdk_top/spdk_top.o 00:02:26.207 TEST_HEADER include/spdk/crc16.h 00:02:26.207 TEST_HEADER include/spdk/cpuset.h 00:02:26.207 TEST_HEADER include/spdk/crc32.h 00:02:26.207 CC app/spdk_nvme_discover/discovery_aer.o 00:02:26.207 TEST_HEADER include/spdk/crc64.h 00:02:26.207 CC app/spdk_nvme_identify/identify.o 00:02:26.207 TEST_HEADER include/spdk/dif.h 00:02:26.207 CC app/spdk_nvme_perf/perf.o 00:02:26.207 TEST_HEADER include/spdk/dma.h 00:02:26.207 TEST_HEADER include/spdk/endian.h 00:02:26.207 TEST_HEADER include/spdk/env_dpdk.h 00:02:26.207 TEST_HEADER include/spdk/env.h 00:02:26.207 TEST_HEADER include/spdk/event.h 00:02:26.207 TEST_HEADER include/spdk/fd_group.h 00:02:26.207 TEST_HEADER include/spdk/fd.h 
00:02:26.207 TEST_HEADER include/spdk/file.h 00:02:26.207 TEST_HEADER include/spdk/fsdev_module.h 00:02:26.207 TEST_HEADER include/spdk/fsdev.h 00:02:26.207 TEST_HEADER include/spdk/ftl.h 00:02:26.207 TEST_HEADER include/spdk/hexlify.h 00:02:26.207 TEST_HEADER include/spdk/gpt_spec.h 00:02:26.207 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:26.207 TEST_HEADER include/spdk/histogram_data.h 00:02:26.207 TEST_HEADER include/spdk/idxd.h 00:02:26.207 TEST_HEADER include/spdk/idxd_spec.h 00:02:26.207 TEST_HEADER include/spdk/ioat.h 00:02:26.207 TEST_HEADER include/spdk/init.h 00:02:26.207 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:26.207 TEST_HEADER include/spdk/ioat_spec.h 00:02:26.207 TEST_HEADER include/spdk/iscsi_spec.h 00:02:26.207 TEST_HEADER include/spdk/json.h 00:02:26.207 TEST_HEADER include/spdk/jsonrpc.h 00:02:26.207 TEST_HEADER include/spdk/keyring.h 00:02:26.207 TEST_HEADER include/spdk/keyring_module.h 00:02:26.207 TEST_HEADER include/spdk/likely.h 00:02:26.207 TEST_HEADER include/spdk/log.h 00:02:26.207 TEST_HEADER include/spdk/lvol.h 00:02:26.207 TEST_HEADER include/spdk/md5.h 00:02:26.207 TEST_HEADER include/spdk/memory.h 00:02:26.207 TEST_HEADER include/spdk/mmio.h 00:02:26.207 CC app/spdk_tgt/spdk_tgt.o 00:02:26.207 TEST_HEADER include/spdk/nbd.h 00:02:26.207 TEST_HEADER include/spdk/net.h 00:02:26.207 CC app/nvmf_tgt/nvmf_main.o 00:02:26.207 TEST_HEADER include/spdk/notify.h 00:02:26.207 TEST_HEADER include/spdk/nvme.h 00:02:26.207 CC app/spdk_dd/spdk_dd.o 00:02:26.207 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.207 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.207 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.207 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.207 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.207 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.207 CC app/iscsi_tgt/iscsi_tgt.o 00:02:26.207 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.207 TEST_HEADER include/spdk/nvmf.h 00:02:26.207 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.207 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.207 TEST_HEADER include/spdk/pci_ids.h 00:02:26.207 TEST_HEADER include/spdk/opal.h 00:02:26.207 TEST_HEADER include/spdk/opal_spec.h 00:02:26.207 TEST_HEADER include/spdk/pipe.h 00:02:26.207 TEST_HEADER include/spdk/queue.h 00:02:26.207 TEST_HEADER include/spdk/reduce.h 00:02:26.207 TEST_HEADER include/spdk/rpc.h 00:02:26.207 TEST_HEADER include/spdk/sock.h 00:02:26.207 TEST_HEADER include/spdk/scheduler.h 00:02:26.207 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.207 TEST_HEADER include/spdk/scsi.h 00:02:26.207 TEST_HEADER include/spdk/string.h 00:02:26.207 TEST_HEADER include/spdk/stdinc.h 00:02:26.207 TEST_HEADER include/spdk/thread.h 00:02:26.207 TEST_HEADER include/spdk/trace.h 00:02:26.207 TEST_HEADER include/spdk/tree.h 00:02:26.207 TEST_HEADER include/spdk/trace_parser.h 00:02:26.207 TEST_HEADER include/spdk/ublk.h 00:02:26.207 TEST_HEADER include/spdk/util.h 00:02:26.207 TEST_HEADER include/spdk/uuid.h 00:02:26.207 TEST_HEADER include/spdk/version.h 00:02:26.207 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:26.207 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:26.467 TEST_HEADER include/spdk/vhost.h 00:02:26.467 TEST_HEADER include/spdk/zipf.h 00:02:26.467 TEST_HEADER include/spdk/xor.h 00:02:26.467 TEST_HEADER include/spdk/vmd.h 00:02:26.467 CXX test/cpp_headers/accel.o 00:02:26.467 CXX test/cpp_headers/accel_module.o 00:02:26.467 CXX test/cpp_headers/assert.o 00:02:26.467 CXX test/cpp_headers/barrier.o 00:02:26.467 CXX test/cpp_headers/bdev.o 
00:02:26.467 CXX test/cpp_headers/base64.o 00:02:26.467 CXX test/cpp_headers/bdev_zone.o 00:02:26.467 CXX test/cpp_headers/bit_array.o 00:02:26.467 CXX test/cpp_headers/bdev_module.o 00:02:26.467 CXX test/cpp_headers/blob_bdev.o 00:02:26.467 CXX test/cpp_headers/bit_pool.o 00:02:26.467 CXX test/cpp_headers/blobfs.o 00:02:26.467 CXX test/cpp_headers/blobfs_bdev.o 00:02:26.467 CXX test/cpp_headers/blob.o 00:02:26.467 CXX test/cpp_headers/conf.o 00:02:26.467 CXX test/cpp_headers/config.o 00:02:26.467 CXX test/cpp_headers/cpuset.o 00:02:26.467 CXX test/cpp_headers/crc32.o 00:02:26.467 CXX test/cpp_headers/crc16.o 00:02:26.467 CXX test/cpp_headers/dif.o 00:02:26.467 CXX test/cpp_headers/dma.o 00:02:26.467 CXX test/cpp_headers/crc64.o 00:02:26.467 CXX test/cpp_headers/endian.o 00:02:26.467 CXX test/cpp_headers/env_dpdk.o 00:02:26.467 CXX test/cpp_headers/env.o 00:02:26.467 CXX test/cpp_headers/event.o 00:02:26.467 CXX test/cpp_headers/fd.o 00:02:26.467 CXX test/cpp_headers/fd_group.o 00:02:26.467 CXX test/cpp_headers/fsdev.o 00:02:26.467 CXX test/cpp_headers/file.o 00:02:26.467 CXX test/cpp_headers/ftl.o 00:02:26.467 CXX test/cpp_headers/fsdev_module.o 00:02:26.467 CXX test/cpp_headers/fuse_dispatcher.o 00:02:26.467 CXX test/cpp_headers/gpt_spec.o 00:02:26.467 CXX test/cpp_headers/idxd.o 00:02:26.467 CXX test/cpp_headers/histogram_data.o 00:02:26.467 CXX test/cpp_headers/hexlify.o 00:02:26.467 CXX test/cpp_headers/idxd_spec.o 00:02:26.467 CXX test/cpp_headers/init.o 00:02:26.467 CXX test/cpp_headers/ioat_spec.o 00:02:26.467 CXX test/cpp_headers/ioat.o 00:02:26.467 CXX test/cpp_headers/iscsi_spec.o 00:02:26.467 CXX test/cpp_headers/jsonrpc.o 00:02:26.467 CXX test/cpp_headers/json.o 00:02:26.467 CXX test/cpp_headers/keyring.o 00:02:26.467 CXX test/cpp_headers/likely.o 00:02:26.467 CXX test/cpp_headers/md5.o 00:02:26.467 CXX test/cpp_headers/keyring_module.o 00:02:26.467 CXX test/cpp_headers/log.o 00:02:26.467 CXX test/cpp_headers/lvol.o 00:02:26.467 CXX test/cpp_headers/mmio.o 00:02:26.467 CXX test/cpp_headers/net.o 00:02:26.467 CXX test/cpp_headers/nvme.o 00:02:26.467 CXX test/cpp_headers/nbd.o 00:02:26.467 CXX test/cpp_headers/memory.o 00:02:26.467 CXX test/cpp_headers/notify.o 00:02:26.467 CXX test/cpp_headers/nvme_intel.o 00:02:26.467 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.467 CXX test/cpp_headers/nvme_spec.o 00:02:26.467 CXX test/cpp_headers/nvme_zns.o 00:02:26.467 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.467 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.467 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.467 CXX test/cpp_headers/nvmf.o 00:02:26.467 CXX test/cpp_headers/opal.o 00:02:26.467 CXX test/cpp_headers/nvmf_spec.o 00:02:26.467 CXX test/cpp_headers/nvmf_transport.o 00:02:26.467 CXX test/cpp_headers/opal_spec.o 00:02:26.467 CXX test/cpp_headers/pipe.o 00:02:26.467 CXX test/cpp_headers/pci_ids.o 00:02:26.467 CXX test/cpp_headers/queue.o 00:02:26.467 CXX test/cpp_headers/reduce.o 00:02:26.467 CXX test/cpp_headers/scheduler.o 00:02:26.467 CXX test/cpp_headers/rpc.o 00:02:26.467 CC test/thread/poller_perf/poller_perf.o 00:02:26.467 CXX test/cpp_headers/sock.o 00:02:26.467 CXX test/cpp_headers/scsi_spec.o 00:02:26.467 CXX test/cpp_headers/stdinc.o 00:02:26.467 CXX test/cpp_headers/scsi.o 00:02:26.467 CXX test/cpp_headers/string.o 00:02:26.467 CXX test/cpp_headers/thread.o 00:02:26.467 CXX test/cpp_headers/trace.o 00:02:26.467 CXX test/cpp_headers/trace_parser.o 00:02:26.467 CXX test/cpp_headers/tree.o 00:02:26.467 CXX test/cpp_headers/ublk.o 00:02:26.467 CXX test/cpp_headers/uuid.o 
00:02:26.467 CXX test/cpp_headers/util.o 00:02:26.467 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.467 CXX test/cpp_headers/version.o 00:02:26.467 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.467 CXX test/cpp_headers/vhost.o 00:02:26.467 CXX test/cpp_headers/vmd.o 00:02:26.467 CXX test/cpp_headers/xor.o 00:02:26.467 CC test/app/histogram_perf/histogram_perf.o 00:02:26.467 CC test/env/pci/pci_ut.o 00:02:26.467 CXX test/cpp_headers/zipf.o 00:02:26.467 CC test/env/vtophys/vtophys.o 00:02:26.467 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.467 CC test/app/jsoncat/jsoncat.o 00:02:26.467 CC test/app/stub/stub.o 00:02:26.467 LINK spdk_lspci 00:02:26.467 CC examples/ioat/perf/perf.o 00:02:26.467 CC examples/ioat/verify/verify.o 00:02:26.467 LINK rpc_client_test 00:02:26.467 CC test/env/memory/memory_ut.o 00:02:26.467 CC examples/util/zipf/zipf.o 00:02:26.467 CC app/fio/bdev/fio_plugin.o 00:02:26.467 CC test/app/bdev_svc/bdev_svc.o 00:02:26.467 CC app/fio/nvme/fio_plugin.o 00:02:26.467 LINK spdk_nvme_discover 00:02:26.467 CC test/dma/test_dma/test_dma.o 00:02:26.729 LINK interrupt_tgt 00:02:26.729 LINK nvmf_tgt 00:02:26.729 CC test/env/mem_callbacks/mem_callbacks.o 00:02:26.729 LINK iscsi_tgt 00:02:26.729 LINK spdk_trace_record 00:02:26.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.988 LINK spdk_tgt 00:02:26.988 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.988 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.988 LINK stub 00:02:26.988 LINK histogram_perf 00:02:26.988 LINK jsoncat 00:02:26.988 LINK poller_perf 00:02:27.247 LINK vtophys 00:02:27.247 LINK env_dpdk_post_init 00:02:27.247 LINK zipf 00:02:27.247 LINK spdk_dd 00:02:27.247 LINK spdk_trace 00:02:27.247 LINK bdev_svc 00:02:27.247 LINK ioat_perf 00:02:27.247 LINK verify 00:02:27.247 LINK test_dma 00:02:27.508 LINK pci_ut 00:02:27.508 LINK spdk_nvme 00:02:27.508 LINK vhost_fuzz 00:02:27.508 LINK spdk_top 00:02:27.508 LINK nvme_fuzz 00:02:27.508 LINK spdk_bdev 00:02:27.508 CC test/event/event_perf/event_perf.o 00:02:27.508 CC test/event/reactor_perf/reactor_perf.o 00:02:27.508 CC app/vhost/vhost.o 00:02:27.508 CC test/event/reactor/reactor.o 00:02:27.508 LINK spdk_nvme_identify 00:02:27.508 CC test/event/app_repeat/app_repeat.o 00:02:27.508 LINK spdk_nvme_perf 00:02:27.508 CC test/event/scheduler/scheduler.o 00:02:27.508 CC examples/sock/hello_world/hello_sock.o 00:02:27.768 CC examples/vmd/lsvmd/lsvmd.o 00:02:27.768 CC examples/vmd/led/led.o 00:02:27.768 CC examples/idxd/perf/perf.o 00:02:27.768 LINK mem_callbacks 00:02:27.768 CC examples/thread/thread/thread_ex.o 00:02:27.768 LINK reactor_perf 00:02:27.768 LINK reactor 00:02:27.768 LINK event_perf 00:02:27.768 LINK app_repeat 00:02:27.768 LINK vhost 00:02:27.768 LINK lsvmd 00:02:27.768 LINK led 00:02:27.768 LINK scheduler 00:02:28.028 LINK hello_sock 00:02:28.028 CC test/nvme/overhead/overhead.o 00:02:28.028 CC test/nvme/e2edp/nvme_dp.o 00:02:28.028 CC test/nvme/sgl/sgl.o 00:02:28.028 CC test/nvme/reset/reset.o 00:02:28.028 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.028 CC test/nvme/connect_stress/connect_stress.o 00:02:28.028 CC test/nvme/aer/aer.o 00:02:28.028 CC test/nvme/fdp/fdp.o 00:02:28.028 CC test/nvme/compliance/nvme_compliance.o 00:02:28.028 CC test/nvme/err_injection/err_injection.o 00:02:28.028 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.028 CC test/nvme/reserve/reserve.o 00:02:28.028 CC test/nvme/startup/startup.o 00:02:28.028 CC test/nvme/boot_partition/boot_partition.o 
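The long run of "CXX test/cpp_headers/<name>.o" objects above is a header self-containment check: every public spdk/*.h header enumerated in the TEST_HEADER lines is compiled in its own C++ translation unit, so a header that is missing an include or breaks under a C++ compiler fails the build here. A minimal sketch of that kind of check (hypothetical generator loop; the tree's actual harness produces these translation units through its own build rules):

    # compile one throwaway C++ TU per public header
    for h in include/spdk/*.h; do
        name=$(basename "$h" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_$name.cpp"
        g++ -std=c++11 -Iinclude -c "/tmp/hdr_$name.cpp" -o "/tmp/hdr_$name.o"
    done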
00:02:28.028 CC test/nvme/simple_copy/simple_copy.o 00:02:28.028 CC test/nvme/cuse/cuse.o 00:02:28.028 CC test/blobfs/mkfs/mkfs.o 00:02:28.028 CC test/accel/dif/dif.o 00:02:28.028 LINK thread 00:02:28.028 LINK memory_ut 00:02:28.028 LINK idxd_perf 00:02:28.028 CC test/lvol/esnap/esnap.o 00:02:28.028 LINK doorbell_aers 00:02:28.028 LINK boot_partition 00:02:28.287 LINK connect_stress 00:02:28.287 LINK overhead 00:02:28.287 LINK reserve 00:02:28.287 LINK startup 00:02:28.287 LINK err_injection 00:02:28.287 LINK fused_ordering 00:02:28.287 LINK sgl 00:02:28.287 LINK simple_copy 00:02:28.287 LINK mkfs 00:02:28.287 LINK aer 00:02:28.287 LINK nvme_dp 00:02:28.287 LINK reset 00:02:28.287 LINK fdp 00:02:28.287 LINK nvme_compliance 00:02:28.287 CC examples/nvme/reconnect/reconnect.o 00:02:28.287 CC examples/nvme/hotplug/hotplug.o 00:02:28.287 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:28.287 CC examples/nvme/arbitration/arbitration.o 00:02:28.287 LINK iscsi_fuzz 00:02:28.287 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.287 CC examples/nvme/abort/abort.o 00:02:28.287 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:28.287 CC examples/nvme/hello_world/hello_world.o 00:02:28.548 LINK hotplug 00:02:28.548 CC examples/accel/perf/accel_perf.o 00:02:28.548 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:28.548 CC examples/blob/cli/blobcli.o 00:02:28.548 LINK pmr_persistence 00:02:28.548 CC examples/blob/hello_world/hello_blob.o 00:02:28.548 LINK dif 00:02:28.548 LINK cmb_copy 00:02:28.548 LINK hello_world 00:02:28.808 LINK reconnect 00:02:28.808 LINK arbitration 00:02:28.808 LINK abort 00:02:28.808 LINK nvme_manage 00:02:28.808 LINK hello_blob 00:02:28.808 LINK hello_fsdev 00:02:29.069 LINK accel_perf 00:02:29.069 LINK blobcli 00:02:29.069 LINK cuse 00:02:29.331 CC test/bdev/bdevio/bdevio.o 00:02:29.591 CC examples/bdev/hello_world/hello_bdev.o 00:02:29.591 CC examples/bdev/bdevperf/bdevperf.o 00:02:29.591 LINK bdevio 00:02:29.852 LINK hello_bdev 00:02:30.423 LINK bdevperf 00:02:30.995 CC examples/nvmf/nvmf/nvmf.o 00:02:31.255 LINK nvmf 00:02:32.198 LINK esnap 00:02:32.769 00:02:32.769 real 0m53.926s 00:02:32.769 user 7m49.901s 00:02:32.769 sys 4m27.984s 00:02:32.769 07:03:07 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:32.769 07:03:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.769 ************************************ 00:02:32.769 END TEST make 00:02:32.769 ************************************ 00:02:32.769 07:03:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.769 07:03:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.769 07:03:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.769 07:03:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.769 07:03:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.769 07:03:07 -- pm/common@44 -- $ pid=939204 00:02:32.769 07:03:07 -- pm/common@50 -- $ kill -TERM 939204 00:02:32.769 07:03:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.769 07:03:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.769 07:03:07 -- pm/common@44 -- $ pid=939205 00:02:32.769 07:03:07 -- pm/common@50 -- $ kill -TERM 939205 00:02:32.769 07:03:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.769 07:03:07 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.769 07:03:07 -- pm/common@44 -- $ pid=939206 00:02:32.769 07:03:07 -- pm/common@50 -- $ kill -TERM 939206 00:02:32.769 07:03:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.769 07:03:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.769 07:03:07 -- pm/common@44 -- $ pid=939231 00:02:32.769 07:03:07 -- pm/common@50 -- $ sudo -E kill -TERM 939231 00:02:32.769 07:03:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:32.769 07:03:07 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:32.769 07:03:07 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:32.769 07:03:07 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:32.769 07:03:07 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:32.769 07:03:07 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:32.769 07:03:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:32.769 07:03:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:32.769 07:03:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:32.769 07:03:07 -- scripts/common.sh@336 -- # IFS=.-: 00:02:32.769 07:03:07 -- scripts/common.sh@336 -- # read -ra ver1 00:02:32.769 07:03:07 -- scripts/common.sh@337 -- # IFS=.-: 00:02:32.769 07:03:07 -- scripts/common.sh@337 -- # read -ra ver2 00:02:32.769 07:03:07 -- scripts/common.sh@338 -- # local 'op=<' 00:02:32.769 07:03:07 -- scripts/common.sh@340 -- # ver1_l=2 00:02:32.769 07:03:07 -- scripts/common.sh@341 -- # ver2_l=1 00:02:32.769 07:03:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:32.769 07:03:07 -- scripts/common.sh@344 -- # case "$op" in 00:02:32.769 07:03:07 -- scripts/common.sh@345 -- # : 1 00:02:32.769 07:03:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:32.769 07:03:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.030 07:03:07 -- scripts/common.sh@365 -- # decimal 1 00:02:33.030 07:03:07 -- scripts/common.sh@353 -- # local d=1 00:02:33.030 07:03:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:33.030 07:03:07 -- scripts/common.sh@355 -- # echo 1 00:02:33.030 07:03:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:33.030 07:03:07 -- scripts/common.sh@366 -- # decimal 2 00:02:33.030 07:03:07 -- scripts/common.sh@353 -- # local d=2 00:02:33.030 07:03:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:33.030 07:03:07 -- scripts/common.sh@355 -- # echo 2 00:02:33.030 07:03:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:33.030 07:03:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:33.030 07:03:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:33.030 07:03:07 -- scripts/common.sh@368 -- # return 0 00:02:33.030 07:03:07 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:33.030 07:03:07 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:33.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.030 --rc genhtml_branch_coverage=1 00:02:33.030 --rc genhtml_function_coverage=1 00:02:33.030 --rc genhtml_legend=1 00:02:33.030 --rc geninfo_all_blocks=1 00:02:33.030 --rc geninfo_unexecuted_blocks=1 00:02:33.030 00:02:33.030 ' 00:02:33.030 07:03:07 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:33.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.030 --rc genhtml_branch_coverage=1 00:02:33.030 --rc genhtml_function_coverage=1 00:02:33.030 --rc genhtml_legend=1 00:02:33.030 --rc geninfo_all_blocks=1 00:02:33.030 --rc geninfo_unexecuted_blocks=1 00:02:33.030 00:02:33.030 ' 00:02:33.030 07:03:07 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:33.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.030 --rc genhtml_branch_coverage=1 00:02:33.031 --rc genhtml_function_coverage=1 00:02:33.031 --rc genhtml_legend=1 00:02:33.031 --rc geninfo_all_blocks=1 00:02:33.031 --rc geninfo_unexecuted_blocks=1 00:02:33.031 00:02:33.031 ' 00:02:33.031 07:03:07 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:33.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.031 --rc genhtml_branch_coverage=1 00:02:33.031 --rc genhtml_function_coverage=1 00:02:33.031 --rc genhtml_legend=1 00:02:33.031 --rc geninfo_all_blocks=1 00:02:33.031 --rc geninfo_unexecuted_blocks=1 00:02:33.031 00:02:33.031 ' 00:02:33.031 07:03:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.031 07:03:07 -- nvmf/common.sh@7 -- # uname -s 00:02:33.031 07:03:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.031 07:03:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.031 07:03:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.031 07:03:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.031 07:03:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.031 07:03:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.031 07:03:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.031 07:03:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.031 07:03:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.031 07:03:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.031 07:03:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:33.031 07:03:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:33.031 07:03:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.031 07:03:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.031 07:03:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.031 07:03:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.031 07:03:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.031 07:03:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:33.031 07:03:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.031 07:03:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.031 07:03:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.031 07:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.031 07:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.031 07:03:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.031 07:03:07 -- paths/export.sh@5 -- # export PATH 00:02:33.031 07:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.031 07:03:07 -- nvmf/common.sh@51 -- # : 0 00:02:33.031 07:03:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:33.031 07:03:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:33.031 07:03:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.031 07:03:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.031 07:03:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.031 07:03:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:33.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:33.031 07:03:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:33.031 07:03:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:33.031 07:03:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:33.031 07:03:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.031 07:03:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.031 07:03:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.031 07:03:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.031 07:03:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
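The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33 handing an empty string to the numeric-only -eq operator. A minimal sketch of the guard, assuming FLAG is a hypothetical stand-in for whatever variable that line actually reads (the log does not show its name):

    # FLAG stands in for the variable tested at nvmf/common.sh line 33;
    # in this run it expands to the empty string.
    FLAG=""

    # Reproduces the failure: '[' '' -eq 1 ']' has no integer operand.
    #   [ "$FLAG" -eq 1 ] && enable_feature

    # Defaulting the value before the numeric test avoids the error:
    if [ "${FLAG:-0}" -eq 1 ]; then
        enable_feature   # enable_feature is a placeholder for the guarded action
    fi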
00:02:33.031 07:03:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.031 07:03:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.031 07:03:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.031 07:03:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.031 07:03:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:33.031 07:03:07 -- spdk/autotest.sh@48 -- # udevadm_pid=1004523 00:02:33.031 07:03:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.031 07:03:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.031 07:03:07 -- pm/common@17 -- # local monitor 00:02:33.031 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.031 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.031 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.031 07:03:07 -- pm/common@21 -- # date +%s 00:02:33.031 07:03:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.031 07:03:07 -- pm/common@21 -- # date +%s 00:02:33.031 07:03:07 -- pm/common@25 -- # sleep 1 00:02:33.031 07:03:07 -- pm/common@21 -- # date +%s 00:02:33.031 07:03:07 -- pm/common@21 -- # date +%s 00:02:33.031 07:03:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082587 00:02:33.031 07:03:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082587 00:02:33.031 07:03:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082587 00:02:33.031 07:03:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082587 00:02:33.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082587_collect-cpu-load.pm.log 00:02:33.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082587_collect-vmstat.pm.log 00:02:33.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082587_collect-cpu-temp.pm.log 00:02:33.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082587_collect-bmc-pm.bmc.pm.log 00:02:33.972 07:03:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.972 07:03:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.972 07:03:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:33.972 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:02:33.972 07:03:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.972 07:03:08 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:33.972 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:02:33.972 07:03:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.972 07:03:08 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.972 07:03:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.972 07:03:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.972 07:03:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.972 07:03:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.972 07:03:08 -- common/autotest_common.sh@1455 -- # uname 00:02:33.972 07:03:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:33.972 07:03:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.972 07:03:08 -- common/autotest_common.sh@1475 -- # uname 00:02:33.972 07:03:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:33.972 07:03:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:33.972 07:03:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:34.232 lcov: LCOV version 1.15 00:02:34.232 07:03:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:04.337 07:03:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:04.337 07:03:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.337 07:03:39 -- common/autotest_common.sh@10 -- # set +x 00:03:04.337 07:03:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:04.337 07:03:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.538 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:08.538 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.538 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.799 07:03:43 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:08.799 07:03:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:08.799 07:03:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:08.799 07:03:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:08.799 07:03:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:08.799 07:03:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:08.799 07:03:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:08.799 07:03:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.799 07:03:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:08.799 07:03:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:08.799 07:03:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.799 07:03:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:08.799 07:03:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:08.799 07:03:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:08.799 07:03:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.059 No valid GPT data, bailing 00:03:09.059 07:03:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.059 07:03:43 -- scripts/common.sh@394 -- # pt= 00:03:09.059 07:03:43 -- scripts/common.sh@395 -- # return 1 00:03:09.059 07:03:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.059 1+0 records in 00:03:09.059 1+0 records out 00:03:09.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436532 s, 240 MB/s 00:03:09.059 07:03:43 -- spdk/autotest.sh@105 -- # sync 00:03:09.059 07:03:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.059 07:03:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.059 07:03:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.053 07:03:52 -- spdk/autotest.sh@111 -- # uname -s 00:03:19.053 07:03:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:19.053 07:03:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:19.053 07:03:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.706 Hugepages 00:03:21.706 node hugesize free / total 00:03:21.706 node0 1048576kB 0 / 0 00:03:21.706 node0 2048kB 0 / 0 00:03:21.706 node1 1048576kB 0 / 0 00:03:21.706 node1 2048kB 0 / 0 00:03:21.706 00:03:21.706 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.706 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:21.706 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:21.706 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:21.706 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:21.706 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:21.706 07:03:56 -- spdk/autotest.sh@117 -- # uname -s 00:03:21.706 07:03:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:21.707 07:03:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:21.707 07:03:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.911 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:25.911 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:27.295 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:27.867 07:04:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:28.809 07:04:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:28.809 07:04:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:28.809 07:04:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:28.809 07:04:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:28.809 07:04:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:28.809 07:04:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:28.809 07:04:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:28.809 07:04:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:28.809 07:04:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:28.809 07:04:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:28.809 07:04:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:28.809 07:04:03 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.018 Waiting for block devices as requested 00:03:33.018 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:33.018 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:33.279 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:33.279 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:33.539 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:33.539 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:33.539 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:33.539 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:33.799 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:33.799 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:33.799 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:34.060 07:04:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:34.060 07:04:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:34.060 07:04:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:34.060 07:04:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:34.060 07:04:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:34.319 07:04:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:34.319 07:04:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:34.319 07:04:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:34.319 07:04:08 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:34.319 07:04:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:34.319 07:04:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:34.319 07:04:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:34.319 07:04:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:34.319 07:04:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:34.319 07:04:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:34.319 07:04:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:34.319 07:04:08 -- common/autotest_common.sh@1541 -- # continue 00:03:34.319 07:04:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:34.319 07:04:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:34.319 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:34.319 07:04:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:34.319 07:04:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.319 07:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:34.319 07:04:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.522 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.522 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.523 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.523 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.523 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.523 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:38.812 07:04:13 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:38.812 07:04:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:38.812 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:03:38.812 07:04:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:38.812 07:04:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:38.812 07:04:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:38.812 07:04:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:38.812 07:04:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:38.812 07:04:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:38.812 07:04:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:38.812 07:04:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:38.812 07:04:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:38.812 07:04:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:38.812 07:04:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.812 07:04:13 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.812 07:04:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:38.812 07:04:13 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:38.812 07:04:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:38.812 07:04:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:38.812 07:04:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:38.812 07:04:13 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:38.812 07:04:13 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:38.812 07:04:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:38.812 07:04:13 -- common/autotest_common.sh@1570 -- # return 0 00:03:38.812 07:04:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:38.812 07:04:13 -- common/autotest_common.sh@1578 -- # return 0 00:03:38.812 07:04:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:38.812 07:04:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:38.812 07:04:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.812 07:04:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:38.812 07:04:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:38.812 07:04:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.812 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:03:38.812 07:04:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:38.812 07:04:13 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:38.812 07:04:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:38.812 07:04:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:38.812 07:04:13 -- common/autotest_common.sh@10 -- # set +x 00:03:38.812 ************************************ 00:03:38.812 START TEST env 00:03:38.812 ************************************ 00:03:38.812 07:04:13 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:39.073 * Looking for test storage... 
00:03:39.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:39.073 07:04:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.073 07:04:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.073 07:04:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.073 07:04:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.073 07:04:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.073 07:04:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.073 07:04:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.073 07:04:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.073 07:04:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.073 07:04:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.073 07:04:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.073 07:04:13 env -- scripts/common.sh@344 -- # case "$op" in 00:03:39.073 07:04:13 env -- scripts/common.sh@345 -- # : 1 00:03:39.073 07:04:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.073 07:04:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:39.073 07:04:13 env -- scripts/common.sh@365 -- # decimal 1 00:03:39.073 07:04:13 env -- scripts/common.sh@353 -- # local d=1 00:03:39.073 07:04:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.073 07:04:13 env -- scripts/common.sh@355 -- # echo 1 00:03:39.073 07:04:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.073 07:04:13 env -- scripts/common.sh@366 -- # decimal 2 00:03:39.073 07:04:13 env -- scripts/common.sh@353 -- # local d=2 00:03:39.073 07:04:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.073 07:04:13 env -- scripts/common.sh@355 -- # echo 2 00:03:39.073 07:04:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.073 07:04:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.073 07:04:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.073 07:04:13 env -- scripts/common.sh@368 -- # return 0 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.073 --rc genhtml_branch_coverage=1 00:03:39.073 --rc genhtml_function_coverage=1 00:03:39.073 --rc genhtml_legend=1 00:03:39.073 --rc geninfo_all_blocks=1 00:03:39.073 --rc geninfo_unexecuted_blocks=1 00:03:39.073 00:03:39.073 ' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.073 --rc genhtml_branch_coverage=1 00:03:39.073 --rc genhtml_function_coverage=1 00:03:39.073 --rc genhtml_legend=1 00:03:39.073 --rc geninfo_all_blocks=1 00:03:39.073 --rc geninfo_unexecuted_blocks=1 00:03:39.073 00:03:39.073 ' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.073 --rc genhtml_branch_coverage=1 00:03:39.073 --rc genhtml_function_coverage=1 
00:03:39.073 --rc genhtml_legend=1 00:03:39.073 --rc geninfo_all_blocks=1 00:03:39.073 --rc geninfo_unexecuted_blocks=1 00:03:39.073 00:03:39.073 ' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.073 --rc genhtml_branch_coverage=1 00:03:39.073 --rc genhtml_function_coverage=1 00:03:39.073 --rc genhtml_legend=1 00:03:39.073 --rc geninfo_all_blocks=1 00:03:39.073 --rc geninfo_unexecuted_blocks=1 00:03:39.073 00:03:39.073 ' 00:03:39.073 07:04:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.073 07:04:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.073 07:04:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.073 ************************************ 00:03:39.073 START TEST env_memory 00:03:39.073 ************************************ 00:03:39.073 07:04:13 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:39.073 00:03:39.073 00:03:39.073 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.073 http://cunit.sourceforge.net/ 00:03:39.073 00:03:39.073 00:03:39.073 Suite: memory 00:03:39.073 Test: alloc and free memory map ...[2024-11-20 07:04:13.798814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:39.073 passed 00:03:39.073 Test: mem map translation ...[2024-11-20 07:04:13.824212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:39.073 [2024-11-20 07:04:13.824234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:39.073 [2024-11-20 07:04:13.824280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:39.073 [2024-11-20 07:04:13.824287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:39.335 passed 00:03:39.335 Test: mem map registration ...[2024-11-20 07:04:13.879490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:39.335 [2024-11-20 07:04:13.879516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:39.335 passed 00:03:39.335 Test: mem map adjacent registrations ...passed 00:03:39.335 00:03:39.335 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.335 suites 1 1 n/a 0 0 00:03:39.335 tests 4 4 4 0 0 00:03:39.335 asserts 152 152 152 0 n/a 00:03:39.335 00:03:39.335 Elapsed time = 0.194 seconds 00:03:39.335 00:03:39.335 real 0m0.209s 00:03:39.335 user 0m0.199s 00:03:39.335 sys 0m0.009s 00:03:39.335 07:04:13 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:39.335 07:04:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:03:39.335 ************************************ 00:03:39.335 END TEST env_memory 00:03:39.335 ************************************ 00:03:39.335 07:04:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.335 07:04:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:39.335 07:04:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:39.335 07:04:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.336 ************************************ 00:03:39.336 START TEST env_vtophys 00:03:39.336 ************************************ 00:03:39.336 07:04:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:39.336 EAL: lib.eal log level changed from notice to debug 00:03:39.336 EAL: Detected lcore 0 as core 0 on socket 0 00:03:39.336 EAL: Detected lcore 1 as core 1 on socket 0 00:03:39.336 EAL: Detected lcore 2 as core 2 on socket 0 00:03:39.336 EAL: Detected lcore 3 as core 3 on socket 0 00:03:39.336 EAL: Detected lcore 4 as core 4 on socket 0 00:03:39.336 EAL: Detected lcore 5 as core 5 on socket 0 00:03:39.336 EAL: Detected lcore 6 as core 6 on socket 0 00:03:39.336 EAL: Detected lcore 7 as core 7 on socket 0 00:03:39.336 EAL: Detected lcore 8 as core 8 on socket 0 00:03:39.336 EAL: Detected lcore 9 as core 9 on socket 0 00:03:39.336 EAL: Detected lcore 10 as core 10 on socket 0 00:03:39.336 EAL: Detected lcore 11 as core 11 on socket 0 00:03:39.336 EAL: Detected lcore 12 as core 12 on socket 0 00:03:39.336 EAL: Detected lcore 13 as core 13 on socket 0 00:03:39.336 EAL: Detected lcore 14 as core 14 on socket 0 00:03:39.336 EAL: Detected lcore 15 as core 15 on socket 0 00:03:39.336 EAL: Detected lcore 16 as core 16 on socket 0 00:03:39.336 EAL: Detected lcore 17 as core 17 on socket 0 00:03:39.336 EAL: Detected lcore 18 as core 18 on socket 0 00:03:39.336 EAL: Detected lcore 19 as core 19 on socket 0 00:03:39.336 EAL: Detected lcore 20 as core 20 on socket 0 00:03:39.336 EAL: Detected lcore 21 as core 21 on socket 0 00:03:39.336 EAL: Detected lcore 22 as core 22 on socket 0 00:03:39.336 EAL: Detected lcore 23 as core 23 on socket 0 00:03:39.336 EAL: Detected lcore 24 as core 24 on socket 0 00:03:39.336 EAL: Detected lcore 25 as core 25 on socket 0 00:03:39.336 EAL: Detected lcore 26 as core 26 on socket 0 00:03:39.336 EAL: Detected lcore 27 as core 27 on socket 0 00:03:39.336 EAL: Detected lcore 28 as core 28 on socket 0 00:03:39.336 EAL: Detected lcore 29 as core 29 on socket 0 00:03:39.336 EAL: Detected lcore 30 as core 30 on socket 0 00:03:39.336 EAL: Detected lcore 31 as core 31 on socket 0 00:03:39.336 EAL: Detected lcore 32 as core 32 on socket 0 00:03:39.336 EAL: Detected lcore 33 as core 33 on socket 0 00:03:39.336 EAL: Detected lcore 34 as core 34 on socket 0 00:03:39.336 EAL: Detected lcore 35 as core 35 on socket 0 00:03:39.336 EAL: Detected lcore 36 as core 0 on socket 1 00:03:39.336 EAL: Detected lcore 37 as core 1 on socket 1 00:03:39.336 EAL: Detected lcore 38 as core 2 on socket 1 00:03:39.336 EAL: Detected lcore 39 as core 3 on socket 1 00:03:39.336 EAL: Detected lcore 40 as core 4 on socket 1 00:03:39.336 EAL: Detected lcore 41 as core 5 on socket 1 00:03:39.336 EAL: Detected lcore 42 as core 6 on socket 1 00:03:39.336 EAL: Detected lcore 43 as core 7 on socket 1 00:03:39.336 EAL: Detected lcore 44 as core 8 on socket 1 00:03:39.336 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:39.336 EAL: Detected lcore 46 as core 10 on socket 1 00:03:39.336 EAL: Detected lcore 47 as core 11 on socket 1 00:03:39.336 EAL: Detected lcore 48 as core 12 on socket 1 00:03:39.336 EAL: Detected lcore 49 as core 13 on socket 1 00:03:39.336 EAL: Detected lcore 50 as core 14 on socket 1 00:03:39.336 EAL: Detected lcore 51 as core 15 on socket 1 00:03:39.336 EAL: Detected lcore 52 as core 16 on socket 1 00:03:39.336 EAL: Detected lcore 53 as core 17 on socket 1 00:03:39.336 EAL: Detected lcore 54 as core 18 on socket 1 00:03:39.336 EAL: Detected lcore 55 as core 19 on socket 1 00:03:39.336 EAL: Detected lcore 56 as core 20 on socket 1 00:03:39.336 EAL: Detected lcore 57 as core 21 on socket 1 00:03:39.336 EAL: Detected lcore 58 as core 22 on socket 1 00:03:39.336 EAL: Detected lcore 59 as core 23 on socket 1 00:03:39.336 EAL: Detected lcore 60 as core 24 on socket 1 00:03:39.336 EAL: Detected lcore 61 as core 25 on socket 1 00:03:39.336 EAL: Detected lcore 62 as core 26 on socket 1 00:03:39.336 EAL: Detected lcore 63 as core 27 on socket 1 00:03:39.336 EAL: Detected lcore 64 as core 28 on socket 1 00:03:39.336 EAL: Detected lcore 65 as core 29 on socket 1 00:03:39.336 EAL: Detected lcore 66 as core 30 on socket 1 00:03:39.336 EAL: Detected lcore 67 as core 31 on socket 1 00:03:39.336 EAL: Detected lcore 68 as core 32 on socket 1 00:03:39.336 EAL: Detected lcore 69 as core 33 on socket 1 00:03:39.336 EAL: Detected lcore 70 as core 34 on socket 1 00:03:39.336 EAL: Detected lcore 71 as core 35 on socket 1 00:03:39.336 EAL: Detected lcore 72 as core 0 on socket 0 00:03:39.336 EAL: Detected lcore 73 as core 1 on socket 0 00:03:39.336 EAL: Detected lcore 74 as core 2 on socket 0 00:03:39.336 EAL: Detected lcore 75 as core 3 on socket 0 00:03:39.336 EAL: Detected lcore 76 as core 4 on socket 0 00:03:39.336 EAL: Detected lcore 77 as core 5 on socket 0 00:03:39.336 EAL: Detected lcore 78 as core 6 on socket 0 00:03:39.336 EAL: Detected lcore 79 as core 7 on socket 0 00:03:39.336 EAL: Detected lcore 80 as core 8 on socket 0 00:03:39.336 EAL: Detected lcore 81 as core 9 on socket 0 00:03:39.336 EAL: Detected lcore 82 as core 10 on socket 0 00:03:39.336 EAL: Detected lcore 83 as core 11 on socket 0 00:03:39.336 EAL: Detected lcore 84 as core 12 on socket 0 00:03:39.336 EAL: Detected lcore 85 as core 13 on socket 0 00:03:39.336 EAL: Detected lcore 86 as core 14 on socket 0 00:03:39.336 EAL: Detected lcore 87 as core 15 on socket 0 00:03:39.336 EAL: Detected lcore 88 as core 16 on socket 0 00:03:39.336 EAL: Detected lcore 89 as core 17 on socket 0 00:03:39.336 EAL: Detected lcore 90 as core 18 on socket 0 00:03:39.336 EAL: Detected lcore 91 as core 19 on socket 0 00:03:39.336 EAL: Detected lcore 92 as core 20 on socket 0 00:03:39.336 EAL: Detected lcore 93 as core 21 on socket 0 00:03:39.336 EAL: Detected lcore 94 as core 22 on socket 0 00:03:39.336 EAL: Detected lcore 95 as core 23 on socket 0 00:03:39.336 EAL: Detected lcore 96 as core 24 on socket 0 00:03:39.336 EAL: Detected lcore 97 as core 25 on socket 0 00:03:39.336 EAL: Detected lcore 98 as core 26 on socket 0 00:03:39.336 EAL: Detected lcore 99 as core 27 on socket 0 00:03:39.336 EAL: Detected lcore 100 as core 28 on socket 0 00:03:39.336 EAL: Detected lcore 101 as core 29 on socket 0 00:03:39.336 EAL: Detected lcore 102 as core 30 on socket 0 00:03:39.336 EAL: Detected lcore 103 as core 31 on socket 0 00:03:39.336 EAL: Detected lcore 104 as core 32 on socket 0 00:03:39.336 EAL: Detected lcore 105 as core 33 on socket 0 00:03:39.336 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:39.336 EAL: Detected lcore 107 as core 35 on socket 0 00:03:39.336 EAL: Detected lcore 108 as core 0 on socket 1 00:03:39.336 EAL: Detected lcore 109 as core 1 on socket 1 00:03:39.336 EAL: Detected lcore 110 as core 2 on socket 1 00:03:39.336 EAL: Detected lcore 111 as core 3 on socket 1 00:03:39.336 EAL: Detected lcore 112 as core 4 on socket 1 00:03:39.336 EAL: Detected lcore 113 as core 5 on socket 1 00:03:39.336 EAL: Detected lcore 114 as core 6 on socket 1 00:03:39.336 EAL: Detected lcore 115 as core 7 on socket 1 00:03:39.336 EAL: Detected lcore 116 as core 8 on socket 1 00:03:39.336 EAL: Detected lcore 117 as core 9 on socket 1 00:03:39.336 EAL: Detected lcore 118 as core 10 on socket 1 00:03:39.336 EAL: Detected lcore 119 as core 11 on socket 1 00:03:39.336 EAL: Detected lcore 120 as core 12 on socket 1 00:03:39.336 EAL: Detected lcore 121 as core 13 on socket 1 00:03:39.336 EAL: Detected lcore 122 as core 14 on socket 1 00:03:39.336 EAL: Detected lcore 123 as core 15 on socket 1 00:03:39.336 EAL: Detected lcore 124 as core 16 on socket 1 00:03:39.336 EAL: Detected lcore 125 as core 17 on socket 1 00:03:39.336 EAL: Detected lcore 126 as core 18 on socket 1 00:03:39.336 EAL: Detected lcore 127 as core 19 on socket 1 00:03:39.336 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:39.336 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:39.336 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:39.336 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:39.336 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:39.336 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:39.336 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:39.336 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:39.336 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:39.336 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:39.336 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:39.336 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:39.336 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:39.336 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:39.336 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:39.336 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:39.336 EAL: Maximum logical cores by configuration: 128 00:03:39.336 EAL: Detected CPU lcores: 128 00:03:39.336 EAL: Detected NUMA nodes: 2 00:03:39.336 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:39.336 EAL: Detected shared linkage of DPDK 00:03:39.336 EAL: No shared files mode enabled, IPC will be disabled 00:03:39.336 EAL: Bus pci wants IOVA as 'DC' 00:03:39.336 EAL: Buses did not request a specific IOVA mode. 00:03:39.336 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:39.336 EAL: Selected IOVA mode 'VA' 00:03:39.336 EAL: Probing VFIO support... 00:03:39.336 EAL: IOMMU type 1 (Type 1) is supported 00:03:39.336 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:39.336 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:39.336 EAL: VFIO support initialized 00:03:39.336 EAL: Ask a virtual area of 0x2e000 bytes 00:03:39.336 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:39.336 EAL: Setting up physically contiguous memory... 
00:03:39.336 EAL: Setting maximum number of open files to 524288 00:03:39.336 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:39.336 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:39.336 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:39.336 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.336 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:39.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.336 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.336 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:39.336 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:39.336 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.336 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:39.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.336 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.336 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:39.336 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:39.337 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:39.337 EAL: Ask a virtual area of 0x61000 bytes 00:03:39.337 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:39.337 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:39.337 EAL: Ask a virtual area of 0x400000000 bytes 00:03:39.337 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:39.337 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:39.337 EAL: Hugepages will be freed exactly as allocated. 00:03:39.337 EAL: No shared files mode enabled, IPC is disabled 00:03:39.337 EAL: No shared files mode enabled, IPC is disabled 00:03:39.337 EAL: TSC frequency is ~2400000 KHz 00:03:39.337 EAL: Main lcore 0 is ready (tid=7f5b780bba00;cpuset=[0]) 00:03:39.337 EAL: Trying to obtain current memory policy. 00:03:39.337 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.337 EAL: Restoring previous memory policy: 0 00:03:39.337 EAL: request: mp_malloc_sync 00:03:39.337 EAL: No shared files mode enabled, IPC is disabled 00:03:39.337 EAL: Heap on socket 0 was expanded by 2MB 00:03:39.337 EAL: No shared files mode enabled, IPC is disabled 00:03:39.597 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:39.597 EAL: Mem event callback 'spdk:(nil)' registered 00:03:39.597 00:03:39.597 00:03:39.597 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.597 http://cunit.sourceforge.net/ 00:03:39.597 00:03:39.597 00:03:39.597 Suite: components_suite 00:03:39.597 Test: vtophys_malloc_test ...passed 00:03:39.597 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:39.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.597 EAL: Restoring previous memory policy: 4 00:03:39.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.597 EAL: request: mp_malloc_sync 00:03:39.597 EAL: No shared files mode enabled, IPC is disabled 00:03:39.597 EAL: Heap on socket 0 was expanded by 4MB 00:03:39.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.597 EAL: request: mp_malloc_sync 00:03:39.597 EAL: No shared files mode enabled, IPC is disabled 00:03:39.597 EAL: Heap on socket 0 was shrunk by 4MB 00:03:39.597 EAL: Trying to obtain current memory policy. 00:03:39.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.597 EAL: Restoring previous memory policy: 4 00:03:39.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.597 EAL: request: mp_malloc_sync 00:03:39.597 EAL: No shared files mode enabled, IPC is disabled 00:03:39.597 EAL: Heap on socket 0 was expanded by 6MB 00:03:39.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.597 EAL: request: mp_malloc_sync 00:03:39.597 EAL: No shared files mode enabled, IPC is disabled 00:03:39.597 EAL: Heap on socket 0 was shrunk by 6MB 00:03:39.597 EAL: Trying to obtain current memory policy. 00:03:39.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.597 EAL: Restoring previous memory policy: 4 00:03:39.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.597 EAL: request: mp_malloc_sync 00:03:39.597 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 10MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 10MB 00:03:39.598 EAL: Trying to obtain current memory policy. 
00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.598 EAL: Restoring previous memory policy: 4 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 18MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 18MB 00:03:39.598 EAL: Trying to obtain current memory policy. 00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.598 EAL: Restoring previous memory policy: 4 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 34MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 34MB 00:03:39.598 EAL: Trying to obtain current memory policy. 00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.598 EAL: Restoring previous memory policy: 4 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 66MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 66MB 00:03:39.598 EAL: Trying to obtain current memory policy. 00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.598 EAL: Restoring previous memory policy: 4 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 130MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 130MB 00:03:39.598 EAL: Trying to obtain current memory policy. 00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.598 EAL: Restoring previous memory policy: 4 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was expanded by 258MB 00:03:39.598 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.598 EAL: request: mp_malloc_sync 00:03:39.598 EAL: No shared files mode enabled, IPC is disabled 00:03:39.598 EAL: Heap on socket 0 was shrunk by 258MB 00:03:39.598 EAL: Trying to obtain current memory policy. 
00:03:39.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:39.858 EAL: Restoring previous memory policy: 4 00:03:39.858 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.858 EAL: request: mp_malloc_sync 00:03:39.858 EAL: No shared files mode enabled, IPC is disabled 00:03:39.858 EAL: Heap on socket 0 was expanded by 514MB 00:03:39.858 EAL: Calling mem event callback 'spdk:(nil)' 00:03:39.858 EAL: request: mp_malloc_sync 00:03:39.858 EAL: No shared files mode enabled, IPC is disabled 00:03:39.858 EAL: Heap on socket 0 was shrunk by 514MB 00:03:39.858 EAL: Trying to obtain current memory policy. 00:03:39.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.120 EAL: Restoring previous memory policy: 4 00:03:40.120 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.120 EAL: request: mp_malloc_sync 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 EAL: Heap on socket 0 was expanded by 1026MB 00:03:40.120 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.120 EAL: request: mp_malloc_sync 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:40.120 passed 00:03:40.120 00:03:40.120 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.120 suites 1 1 n/a 0 0 00:03:40.120 tests 2 2 2 0 0 00:03:40.120 asserts 497 497 497 0 n/a 00:03:40.120 00:03:40.120 Elapsed time = 0.656 seconds 00:03:40.120 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.120 EAL: request: mp_malloc_sync 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 EAL: Heap on socket 0 was shrunk by 2MB 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 EAL: No shared files mode enabled, IPC is disabled 00:03:40.120 00:03:40.120 real 0m0.798s 00:03:40.120 user 0m0.415s 00:03:40.120 sys 0m0.356s 00:03:40.120 07:04:14 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:40.120 07:04:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:40.120 ************************************ 00:03:40.120 END TEST env_vtophys 00:03:40.120 ************************************ 00:03:40.120 07:04:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.120 07:04:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.120 07:04:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.120 07:04:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.381 ************************************ 00:03:40.381 START TEST env_pci 00:03:40.381 ************************************ 00:03:40.381 07:04:14 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:40.381 00:03:40.381 00:03:40.381 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.381 http://cunit.sourceforge.net/ 00:03:40.381 00:03:40.381 00:03:40.381 Suite: pci 00:03:40.381 Test: pci_hook ...[2024-11-20 07:04:14.926540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1025184 has claimed it 00:03:40.381 EAL: Cannot find device (10000:00:01.0) 00:03:40.381 EAL: Failed to attach device on primary process 00:03:40.381 passed 00:03:40.381 00:03:40.381 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:40.381 suites 1 1 n/a 0 0 00:03:40.381 tests 1 1 1 0 0 00:03:40.381 asserts 25 25 25 0 n/a 00:03:40.381 00:03:40.381 Elapsed time = 0.034 seconds 00:03:40.381 00:03:40.381 real 0m0.055s 00:03:40.381 user 0m0.012s 00:03:40.381 sys 0m0.043s 00:03:40.381 07:04:14 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:40.381 07:04:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:40.381 ************************************ 00:03:40.381 END TEST env_pci 00:03:40.381 ************************************ 00:03:40.381 07:04:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:40.381 07:04:15 env -- env/env.sh@15 -- # uname 00:03:40.381 07:04:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:40.381 07:04:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:40.381 07:04:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.381 07:04:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:40.381 07:04:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.381 07:04:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.381 ************************************ 00:03:40.381 START TEST env_dpdk_post_init 00:03:40.381 ************************************ 00:03:40.381 07:04:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:40.381 EAL: Detected CPU lcores: 128 00:03:40.381 EAL: Detected NUMA nodes: 2 00:03:40.381 EAL: Detected shared linkage of DPDK 00:03:40.381 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.381 EAL: Selected IOVA mode 'VA' 00:03:40.381 EAL: VFIO support initialized 00:03:40.381 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.642 EAL: Using IOMMU type 1 (Type 1) 00:03:40.642 EAL: Ignore mapping IO port bar(1) 00:03:40.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:40.904 EAL: Ignore mapping IO port bar(1) 00:03:40.904 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:41.166 EAL: Ignore mapping IO port bar(1) 00:03:41.166 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:41.426 EAL: Ignore mapping IO port bar(1) 00:03:41.426 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:41.687 EAL: Ignore mapping IO port bar(1) 00:03:41.687 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:41.687 EAL: Ignore mapping IO port bar(1) 00:03:41.948 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:41.948 EAL: Ignore mapping IO port bar(1) 00:03:42.209 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:42.209 EAL: Ignore mapping IO port bar(1) 00:03:42.469 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:42.470 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:42.730 EAL: Ignore mapping IO port bar(1) 00:03:42.730 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:42.990 EAL: Ignore mapping IO port bar(1) 00:03:42.990 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:43.251 EAL: Ignore mapping IO port bar(1) 00:03:43.251 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:43.251 EAL: Ignore mapping IO port bar(1) 00:03:43.513 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:43.513 EAL: Ignore mapping IO port bar(1) 00:03:43.774 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:43.774 EAL: Ignore mapping IO port bar(1) 00:03:44.036 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:44.036 EAL: Ignore mapping IO port bar(1) 00:03:44.036 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:44.297 EAL: Ignore mapping IO port bar(1) 00:03:44.297 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:44.297 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:44.297 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:44.559 Starting DPDK initialization... 00:03:44.559 Starting SPDK post initialization... 00:03:44.559 SPDK NVMe probe 00:03:44.559 Attaching to 0000:65:00.0 00:03:44.559 Attached to 0000:65:00.0 00:03:44.559 Cleaning up... 00:03:46.477 00:03:46.477 real 0m5.747s 00:03:46.477 user 0m0.103s 00:03:46.477 sys 0m0.188s 00:03:46.477 07:04:20 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.477 07:04:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.477 ************************************ 00:03:46.477 END TEST env_dpdk_post_init 00:03:46.477 ************************************ 00:03:46.477 07:04:20 env -- env/env.sh@26 -- # uname 00:03:46.477 07:04:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:46.477 07:04:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.477 07:04:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.477 07:04:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.477 07:04:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.477 ************************************ 00:03:46.477 START TEST env_mem_callbacks 00:03:46.477 ************************************ 00:03:46.477 07:04:20 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.477 EAL: Detected CPU lcores: 128 00:03:46.477 EAL: Detected NUMA nodes: 2 00:03:46.477 EAL: Detected shared linkage of DPDK 00:03:46.477 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.477 EAL: Selected IOVA mode 'VA' 00:03:46.477 EAL: VFIO support initialized 00:03:46.477 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.477 00:03:46.477 00:03:46.477 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.477 http://cunit.sourceforge.net/ 00:03:46.477 00:03:46.477 00:03:46.477 Suite: memory 00:03:46.477 Test: test ... 
00:03:46.477 register 0x200000200000 2097152 00:03:46.477 malloc 3145728 00:03:46.477 register 0x200000400000 4194304 00:03:46.477 buf 0x200000500000 len 3145728 PASSED 00:03:46.477 malloc 64 00:03:46.477 buf 0x2000004fff40 len 64 PASSED 00:03:46.477 malloc 4194304 00:03:46.477 register 0x200000800000 6291456 00:03:46.477 buf 0x200000a00000 len 4194304 PASSED 00:03:46.477 free 0x200000500000 3145728 00:03:46.477 free 0x2000004fff40 64 00:03:46.477 unregister 0x200000400000 4194304 PASSED 00:03:46.477 free 0x200000a00000 4194304 00:03:46.477 unregister 0x200000800000 6291456 PASSED 00:03:46.477 malloc 8388608 00:03:46.477 register 0x200000400000 10485760 00:03:46.477 buf 0x200000600000 len 8388608 PASSED 00:03:46.477 free 0x200000600000 8388608 00:03:46.477 unregister 0x200000400000 10485760 PASSED 00:03:46.477 passed 00:03:46.477 00:03:46.477 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.477 suites 1 1 n/a 0 0 00:03:46.477 tests 1 1 1 0 0 00:03:46.477 asserts 15 15 15 0 n/a 00:03:46.477 00:03:46.477 Elapsed time = 0.005 seconds 00:03:46.477 00:03:46.477 real 0m0.064s 00:03:46.477 user 0m0.019s 00:03:46.477 sys 0m0.045s 00:03:46.477 07:04:20 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.477 07:04:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:46.477 ************************************ 00:03:46.477 END TEST env_mem_callbacks 00:03:46.477 ************************************ 00:03:46.477 00:03:46.477 real 0m7.456s 00:03:46.477 user 0m1.006s 00:03:46.477 sys 0m0.996s 00:03:46.477 07:04:20 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.477 07:04:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.477 ************************************ 00:03:46.477 END TEST env 00:03:46.477 ************************************ 00:03:46.477 07:04:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.477 07:04:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.477 07:04:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.477 07:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:46.477 ************************************ 00:03:46.477 START TEST rpc 00:03:46.477 ************************************ 00:03:46.477 07:04:21 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.477 * Looking for test storage... 
00:03:46.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.477 07:04:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:46.477 07:04:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:46.477 07:04:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:46.738 07:04:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.738 07:04:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.738 07:04:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.738 07:04:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.738 07:04:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.738 07:04:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.738 07:04:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.738 07:04:21 rpc -- scripts/common.sh@345 -- # : 1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.738 07:04:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.738 07:04:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.738 07:04:21 rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.738 07:04:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.738 07:04:21 rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.738 07:04:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.738 07:04:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.738 07:04:21 rpc -- scripts/common.sh@368 -- # return 0 00:03:46.738 07:04:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.738 07:04:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.738 --rc genhtml_branch_coverage=1 00:03:46.738 --rc genhtml_function_coverage=1 00:03:46.738 --rc genhtml_legend=1 00:03:46.738 --rc geninfo_all_blocks=1 00:03:46.738 --rc geninfo_unexecuted_blocks=1 00:03:46.738 00:03:46.738 ' 00:03:46.738 07:04:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.738 --rc genhtml_branch_coverage=1 00:03:46.738 --rc genhtml_function_coverage=1 00:03:46.738 --rc genhtml_legend=1 00:03:46.738 --rc geninfo_all_blocks=1 00:03:46.738 --rc geninfo_unexecuted_blocks=1 00:03:46.738 00:03:46.738 ' 00:03:46.738 07:04:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.738 --rc genhtml_branch_coverage=1 00:03:46.739 --rc genhtml_function_coverage=1 
00:03:46.739 --rc genhtml_legend=1 00:03:46.739 --rc geninfo_all_blocks=1 00:03:46.739 --rc geninfo_unexecuted_blocks=1 00:03:46.739 00:03:46.739 ' 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:46.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.739 --rc genhtml_branch_coverage=1 00:03:46.739 --rc genhtml_function_coverage=1 00:03:46.739 --rc genhtml_legend=1 00:03:46.739 --rc geninfo_all_blocks=1 00:03:46.739 --rc geninfo_unexecuted_blocks=1 00:03:46.739 00:03:46.739 ' 00:03:46.739 07:04:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1026637 00:03:46.739 07:04:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.739 07:04:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1026637 00:03:46.739 07:04:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@833 -- # '[' -z 1026637 ']' 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:46.739 07:04:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.739 [2024-11-20 07:04:21.325270] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:03:46.739 [2024-11-20 07:04:21.325342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026637 ] 00:03:46.739 [2024-11-20 07:04:21.407078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.739 [2024-11-20 07:04:21.448428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.739 [2024-11-20 07:04:21.448464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1026637' to capture a snapshot of events at runtime. 00:03:46.739 [2024-11-20 07:04:21.448472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:46.739 [2024-11-20 07:04:21.448478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:46.739 [2024-11-20 07:04:21.448484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1026637 for offline analysis/debug. 
00:03:46.739 [2024-11-20 07:04:21.449088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.679 07:04:22 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:47.680 07:04:22 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:47.680 07:04:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.680 07:04:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.680 07:04:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.680 07:04:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.680 07:04:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.680 07:04:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.680 07:04:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 ************************************ 00:03:47.680 START TEST rpc_integrity 00:03:47.680 ************************************ 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.680 { 00:03:47.680 "name": "Malloc0", 00:03:47.680 "aliases": [ 00:03:47.680 "6ec319ee-9cb0-4496-833a-fc3f9850bb1f" 00:03:47.680 ], 00:03:47.680 "product_name": "Malloc disk", 00:03:47.680 "block_size": 512, 00:03:47.680 "num_blocks": 16384, 00:03:47.680 "uuid": "6ec319ee-9cb0-4496-833a-fc3f9850bb1f", 00:03:47.680 "assigned_rate_limits": { 00:03:47.680 "rw_ios_per_sec": 0, 00:03:47.680 "rw_mbytes_per_sec": 0, 00:03:47.680 "r_mbytes_per_sec": 0, 00:03:47.680 "w_mbytes_per_sec": 0 00:03:47.680 }, 
00:03:47.680 "claimed": false, 00:03:47.680 "zoned": false, 00:03:47.680 "supported_io_types": { 00:03:47.680 "read": true, 00:03:47.680 "write": true, 00:03:47.680 "unmap": true, 00:03:47.680 "flush": true, 00:03:47.680 "reset": true, 00:03:47.680 "nvme_admin": false, 00:03:47.680 "nvme_io": false, 00:03:47.680 "nvme_io_md": false, 00:03:47.680 "write_zeroes": true, 00:03:47.680 "zcopy": true, 00:03:47.680 "get_zone_info": false, 00:03:47.680 "zone_management": false, 00:03:47.680 "zone_append": false, 00:03:47.680 "compare": false, 00:03:47.680 "compare_and_write": false, 00:03:47.680 "abort": true, 00:03:47.680 "seek_hole": false, 00:03:47.680 "seek_data": false, 00:03:47.680 "copy": true, 00:03:47.680 "nvme_iov_md": false 00:03:47.680 }, 00:03:47.680 "memory_domains": [ 00:03:47.680 { 00:03:47.680 "dma_device_id": "system", 00:03:47.680 "dma_device_type": 1 00:03:47.680 }, 00:03:47.680 { 00:03:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.680 "dma_device_type": 2 00:03:47.680 } 00:03:47.680 ], 00:03:47.680 "driver_specific": {} 00:03:47.680 } 00:03:47.680 ]' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 [2024-11-20 07:04:22.287066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.680 [2024-11-20 07:04:22.287098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.680 [2024-11-20 07:04:22.287110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa6e580 00:03:47.680 [2024-11-20 07:04:22.287118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.680 [2024-11-20 07:04:22.288476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.680 [2024-11-20 07:04:22.288498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.680 Passthru0 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.680 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.680 { 00:03:47.680 "name": "Malloc0", 00:03:47.680 "aliases": [ 00:03:47.680 "6ec319ee-9cb0-4496-833a-fc3f9850bb1f" 00:03:47.680 ], 00:03:47.680 "product_name": "Malloc disk", 00:03:47.680 "block_size": 512, 00:03:47.680 "num_blocks": 16384, 00:03:47.680 "uuid": "6ec319ee-9cb0-4496-833a-fc3f9850bb1f", 00:03:47.680 "assigned_rate_limits": { 00:03:47.680 "rw_ios_per_sec": 0, 00:03:47.680 "rw_mbytes_per_sec": 0, 00:03:47.680 "r_mbytes_per_sec": 0, 00:03:47.680 "w_mbytes_per_sec": 0 00:03:47.680 }, 00:03:47.680 "claimed": true, 00:03:47.680 "claim_type": "exclusive_write", 00:03:47.680 "zoned": false, 00:03:47.680 "supported_io_types": { 00:03:47.680 "read": true, 00:03:47.680 "write": true, 00:03:47.680 "unmap": true, 00:03:47.680 "flush": 
true, 00:03:47.680 "reset": true, 00:03:47.680 "nvme_admin": false, 00:03:47.680 "nvme_io": false, 00:03:47.680 "nvme_io_md": false, 00:03:47.680 "write_zeroes": true, 00:03:47.680 "zcopy": true, 00:03:47.680 "get_zone_info": false, 00:03:47.680 "zone_management": false, 00:03:47.680 "zone_append": false, 00:03:47.680 "compare": false, 00:03:47.680 "compare_and_write": false, 00:03:47.680 "abort": true, 00:03:47.680 "seek_hole": false, 00:03:47.680 "seek_data": false, 00:03:47.680 "copy": true, 00:03:47.680 "nvme_iov_md": false 00:03:47.680 }, 00:03:47.680 "memory_domains": [ 00:03:47.680 { 00:03:47.680 "dma_device_id": "system", 00:03:47.680 "dma_device_type": 1 00:03:47.680 }, 00:03:47.680 { 00:03:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.680 "dma_device_type": 2 00:03:47.680 } 00:03:47.680 ], 00:03:47.680 "driver_specific": {} 00:03:47.680 }, 00:03:47.680 { 00:03:47.680 "name": "Passthru0", 00:03:47.680 "aliases": [ 00:03:47.680 "7b317b1e-5487-59f1-9c87-8ebd7ee60ccf" 00:03:47.680 ], 00:03:47.680 "product_name": "passthru", 00:03:47.680 "block_size": 512, 00:03:47.680 "num_blocks": 16384, 00:03:47.680 "uuid": "7b317b1e-5487-59f1-9c87-8ebd7ee60ccf", 00:03:47.680 "assigned_rate_limits": { 00:03:47.680 "rw_ios_per_sec": 0, 00:03:47.680 "rw_mbytes_per_sec": 0, 00:03:47.680 "r_mbytes_per_sec": 0, 00:03:47.680 "w_mbytes_per_sec": 0 00:03:47.680 }, 00:03:47.680 "claimed": false, 00:03:47.680 "zoned": false, 00:03:47.680 "supported_io_types": { 00:03:47.680 "read": true, 00:03:47.680 "write": true, 00:03:47.680 "unmap": true, 00:03:47.680 "flush": true, 00:03:47.680 "reset": true, 00:03:47.680 "nvme_admin": false, 00:03:47.680 "nvme_io": false, 00:03:47.680 "nvme_io_md": false, 00:03:47.680 "write_zeroes": true, 00:03:47.680 "zcopy": true, 00:03:47.680 "get_zone_info": false, 00:03:47.680 "zone_management": false, 00:03:47.680 "zone_append": false, 00:03:47.680 "compare": false, 00:03:47.680 "compare_and_write": false, 00:03:47.680 "abort": true, 00:03:47.680 "seek_hole": false, 00:03:47.680 "seek_data": false, 00:03:47.680 "copy": true, 00:03:47.680 "nvme_iov_md": false 00:03:47.680 }, 00:03:47.680 "memory_domains": [ 00:03:47.680 { 00:03:47.680 "dma_device_id": "system", 00:03:47.680 "dma_device_type": 1 00:03:47.680 }, 00:03:47.680 { 00:03:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.680 "dma_device_type": 2 00:03:47.680 } 00:03:47.680 ], 00:03:47.680 "driver_specific": { 00:03:47.680 "passthru": { 00:03:47.680 "name": "Passthru0", 00:03:47.680 "base_bdev_name": "Malloc0" 00:03:47.680 } 00:03:47.680 } 00:03:47.680 } 00:03:47.680 ]' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.680 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.681 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.681 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.681 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.681 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.681 07:04:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.681 00:03:47.681 real 0m0.293s 00:03:47.681 user 0m0.191s 00:03:47.681 sys 0m0.037s 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.681 07:04:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.681 ************************************ 00:03:47.681 END TEST rpc_integrity 00:03:47.681 ************************************ 00:03:47.942 07:04:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.942 07:04:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.942 07:04:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.942 07:04:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.942 ************************************ 00:03:47.942 START TEST rpc_plugins 00:03:47.942 ************************************ 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.942 { 00:03:47.942 "name": "Malloc1", 00:03:47.942 "aliases": [ 00:03:47.942 "3a0c7ed4-69e2-4924-9d0c-97f1e307fb38" 00:03:47.942 ], 00:03:47.942 "product_name": "Malloc disk", 00:03:47.942 "block_size": 4096, 00:03:47.942 "num_blocks": 256, 00:03:47.942 "uuid": "3a0c7ed4-69e2-4924-9d0c-97f1e307fb38", 00:03:47.942 "assigned_rate_limits": { 00:03:47.942 "rw_ios_per_sec": 0, 00:03:47.942 "rw_mbytes_per_sec": 0, 00:03:47.942 "r_mbytes_per_sec": 0, 00:03:47.942 "w_mbytes_per_sec": 0 00:03:47.942 }, 00:03:47.942 "claimed": false, 00:03:47.942 "zoned": false, 00:03:47.942 "supported_io_types": { 00:03:47.942 "read": true, 00:03:47.942 "write": true, 00:03:47.942 "unmap": true, 00:03:47.942 "flush": true, 00:03:47.942 "reset": true, 00:03:47.942 "nvme_admin": false, 00:03:47.942 "nvme_io": false, 00:03:47.942 "nvme_io_md": false, 00:03:47.942 "write_zeroes": true, 00:03:47.942 "zcopy": true, 00:03:47.942 "get_zone_info": false, 00:03:47.942 "zone_management": false, 00:03:47.942 "zone_append": false, 00:03:47.942 "compare": false, 00:03:47.942 "compare_and_write": false, 00:03:47.942 "abort": true, 00:03:47.942 "seek_hole": false, 00:03:47.942 "seek_data": false, 00:03:47.942 "copy": true, 00:03:47.942 "nvme_iov_md": false 
00:03:47.942 }, 00:03:47.942 "memory_domains": [ 00:03:47.942 { 00:03:47.942 "dma_device_id": "system", 00:03:47.942 "dma_device_type": 1 00:03:47.942 }, 00:03:47.942 { 00:03:47.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.942 "dma_device_type": 2 00:03:47.942 } 00:03:47.942 ], 00:03:47.942 "driver_specific": {} 00:03:47.942 } 00:03:47.942 ]' 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.942 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.942 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.943 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.943 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.943 07:04:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.943 00:03:47.943 real 0m0.153s 00:03:47.943 user 0m0.095s 00:03:47.943 sys 0m0.023s 00:03:47.943 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.943 07:04:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.943 ************************************ 00:03:47.943 END TEST rpc_plugins 00:03:47.943 ************************************ 00:03:48.203 07:04:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:48.203 07:04:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.203 07:04:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.203 07:04:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.203 ************************************ 00:03:48.203 START TEST rpc_trace_cmd_test 00:03:48.203 ************************************ 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.203 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.203 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1026637", 00:03:48.204 "tpoint_group_mask": "0x8", 00:03:48.204 "iscsi_conn": { 00:03:48.204 "mask": "0x2", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "scsi": { 00:03:48.204 "mask": "0x4", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "bdev": { 00:03:48.204 "mask": "0x8", 00:03:48.204 "tpoint_mask": "0xffffffffffffffff" 00:03:48.204 }, 00:03:48.204 "nvmf_rdma": { 00:03:48.204 "mask": "0x10", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "nvmf_tcp": { 00:03:48.204 "mask": "0x20", 00:03:48.204 
"tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "ftl": { 00:03:48.204 "mask": "0x40", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "blobfs": { 00:03:48.204 "mask": "0x80", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "dsa": { 00:03:48.204 "mask": "0x200", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "thread": { 00:03:48.204 "mask": "0x400", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "nvme_pcie": { 00:03:48.204 "mask": "0x800", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "iaa": { 00:03:48.204 "mask": "0x1000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "nvme_tcp": { 00:03:48.204 "mask": "0x2000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "bdev_nvme": { 00:03:48.204 "mask": "0x4000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "sock": { 00:03:48.204 "mask": "0x8000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "blob": { 00:03:48.204 "mask": "0x10000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "bdev_raid": { 00:03:48.204 "mask": "0x20000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 }, 00:03:48.204 "scheduler": { 00:03:48.204 "mask": "0x40000", 00:03:48.204 "tpoint_mask": "0x0" 00:03:48.204 } 00:03:48.204 }' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.204 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.466 07:04:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:48.466 00:03:48.466 real 0m0.251s 00:03:48.466 user 0m0.210s 00:03:48.466 sys 0m0.032s 00:03:48.466 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.466 07:04:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.466 ************************************ 00:03:48.466 END TEST rpc_trace_cmd_test 00:03:48.466 ************************************ 00:03:48.466 07:04:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.466 07:04:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.466 07:04:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.466 07:04:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.466 07:04:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.466 07:04:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.466 ************************************ 00:03:48.466 START TEST rpc_daemon_integrity 00:03:48.466 ************************************ 00:03:48.466 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:48.466 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.466 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.466 07:04:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.466 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.466 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.467 { 00:03:48.467 "name": "Malloc2", 00:03:48.467 "aliases": [ 00:03:48.467 "2bb80090-063d-4cb4-9b9c-b1f4b0904e5a" 00:03:48.467 ], 00:03:48.467 "product_name": "Malloc disk", 00:03:48.467 "block_size": 512, 00:03:48.467 "num_blocks": 16384, 00:03:48.467 "uuid": "2bb80090-063d-4cb4-9b9c-b1f4b0904e5a", 00:03:48.467 "assigned_rate_limits": { 00:03:48.467 "rw_ios_per_sec": 0, 00:03:48.467 "rw_mbytes_per_sec": 0, 00:03:48.467 "r_mbytes_per_sec": 0, 00:03:48.467 "w_mbytes_per_sec": 0 00:03:48.467 }, 00:03:48.467 "claimed": false, 00:03:48.467 "zoned": false, 00:03:48.467 "supported_io_types": { 00:03:48.467 "read": true, 00:03:48.467 "write": true, 00:03:48.467 "unmap": true, 00:03:48.467 "flush": true, 00:03:48.467 "reset": true, 00:03:48.467 "nvme_admin": false, 00:03:48.467 "nvme_io": false, 00:03:48.467 "nvme_io_md": false, 00:03:48.467 "write_zeroes": true, 00:03:48.467 "zcopy": true, 00:03:48.467 "get_zone_info": false, 00:03:48.467 "zone_management": false, 00:03:48.467 "zone_append": false, 00:03:48.467 "compare": false, 00:03:48.467 "compare_and_write": false, 00:03:48.467 "abort": true, 00:03:48.467 "seek_hole": false, 00:03:48.467 "seek_data": false, 00:03:48.467 "copy": true, 00:03:48.467 "nvme_iov_md": false 00:03:48.467 }, 00:03:48.467 "memory_domains": [ 00:03:48.467 { 00:03:48.467 "dma_device_id": "system", 00:03:48.467 "dma_device_type": 1 00:03:48.467 }, 00:03:48.467 { 00:03:48.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.467 "dma_device_type": 2 00:03:48.467 } 00:03:48.467 ], 00:03:48.467 "driver_specific": {} 00:03:48.467 } 00:03:48.467 ]' 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.467 [2024-11-20 07:04:23.221577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.467 
[2024-11-20 07:04:23.221604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.467 [2024-11-20 07:04:23.221618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x92be00 00:03:48.467 [2024-11-20 07:04:23.221625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.467 [2024-11-20 07:04:23.222879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.467 [2024-11-20 07:04:23.222898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.467 Passthru0 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.467 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.728 { 00:03:48.728 "name": "Malloc2", 00:03:48.728 "aliases": [ 00:03:48.728 "2bb80090-063d-4cb4-9b9c-b1f4b0904e5a" 00:03:48.728 ], 00:03:48.728 "product_name": "Malloc disk", 00:03:48.728 "block_size": 512, 00:03:48.728 "num_blocks": 16384, 00:03:48.728 "uuid": "2bb80090-063d-4cb4-9b9c-b1f4b0904e5a", 00:03:48.728 "assigned_rate_limits": { 00:03:48.728 "rw_ios_per_sec": 0, 00:03:48.728 "rw_mbytes_per_sec": 0, 00:03:48.728 "r_mbytes_per_sec": 0, 00:03:48.728 "w_mbytes_per_sec": 0 00:03:48.728 }, 00:03:48.728 "claimed": true, 00:03:48.728 "claim_type": "exclusive_write", 00:03:48.728 "zoned": false, 00:03:48.728 "supported_io_types": { 00:03:48.728 "read": true, 00:03:48.728 "write": true, 00:03:48.728 "unmap": true, 00:03:48.728 "flush": true, 00:03:48.728 "reset": true, 00:03:48.728 "nvme_admin": false, 00:03:48.728 "nvme_io": false, 00:03:48.728 "nvme_io_md": false, 00:03:48.728 "write_zeroes": true, 00:03:48.728 "zcopy": true, 00:03:48.728 "get_zone_info": false, 00:03:48.728 "zone_management": false, 00:03:48.728 "zone_append": false, 00:03:48.728 "compare": false, 00:03:48.728 "compare_and_write": false, 00:03:48.728 "abort": true, 00:03:48.728 "seek_hole": false, 00:03:48.728 "seek_data": false, 00:03:48.728 "copy": true, 00:03:48.728 "nvme_iov_md": false 00:03:48.728 }, 00:03:48.728 "memory_domains": [ 00:03:48.728 { 00:03:48.728 "dma_device_id": "system", 00:03:48.728 "dma_device_type": 1 00:03:48.728 }, 00:03:48.728 { 00:03:48.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.728 "dma_device_type": 2 00:03:48.728 } 00:03:48.728 ], 00:03:48.728 "driver_specific": {} 00:03:48.728 }, 00:03:48.728 { 00:03:48.728 "name": "Passthru0", 00:03:48.728 "aliases": [ 00:03:48.728 "b2090248-aed9-5a94-aeb7-87ff510b1b99" 00:03:48.728 ], 00:03:48.728 "product_name": "passthru", 00:03:48.728 "block_size": 512, 00:03:48.728 "num_blocks": 16384, 00:03:48.728 "uuid": "b2090248-aed9-5a94-aeb7-87ff510b1b99", 00:03:48.728 "assigned_rate_limits": { 00:03:48.728 "rw_ios_per_sec": 0, 00:03:48.728 "rw_mbytes_per_sec": 0, 00:03:48.728 "r_mbytes_per_sec": 0, 00:03:48.728 "w_mbytes_per_sec": 0 00:03:48.728 }, 00:03:48.728 "claimed": false, 00:03:48.728 "zoned": false, 00:03:48.728 "supported_io_types": { 00:03:48.728 "read": true, 00:03:48.728 "write": true, 00:03:48.728 "unmap": true, 00:03:48.728 "flush": true, 00:03:48.728 "reset": true, 
00:03:48.728 "nvme_admin": false, 00:03:48.728 "nvme_io": false, 00:03:48.728 "nvme_io_md": false, 00:03:48.728 "write_zeroes": true, 00:03:48.728 "zcopy": true, 00:03:48.728 "get_zone_info": false, 00:03:48.728 "zone_management": false, 00:03:48.728 "zone_append": false, 00:03:48.728 "compare": false, 00:03:48.728 "compare_and_write": false, 00:03:48.728 "abort": true, 00:03:48.728 "seek_hole": false, 00:03:48.728 "seek_data": false, 00:03:48.728 "copy": true, 00:03:48.728 "nvme_iov_md": false 00:03:48.728 }, 00:03:48.728 "memory_domains": [ 00:03:48.728 { 00:03:48.728 "dma_device_id": "system", 00:03:48.728 "dma_device_type": 1 00:03:48.728 }, 00:03:48.728 { 00:03:48.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.728 "dma_device_type": 2 00:03:48.728 } 00:03:48.728 ], 00:03:48.728 "driver_specific": { 00:03:48.728 "passthru": { 00:03:48.728 "name": "Passthru0", 00:03:48.728 "base_bdev_name": "Malloc2" 00:03:48.728 } 00:03:48.728 } 00:03:48.728 } 00:03:48.728 ]' 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.728 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.729 00:03:48.729 real 0m0.298s 00:03:48.729 user 0m0.182s 00:03:48.729 sys 0m0.046s 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.729 07:04:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.729 ************************************ 00:03:48.729 END TEST rpc_daemon_integrity 00:03:48.729 ************************************ 00:03:48.729 07:04:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.729 07:04:23 rpc -- rpc/rpc.sh@84 -- # killprocess 1026637 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@952 -- # '[' -z 1026637 ']' 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@956 -- # kill -0 1026637 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@957 -- # uname 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1026637 
00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1026637' 00:03:48.729 killing process with pid 1026637 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@971 -- # kill 1026637 00:03:48.729 07:04:23 rpc -- common/autotest_common.sh@976 -- # wait 1026637 00:03:48.989 00:03:48.989 real 0m2.623s 00:03:48.989 user 0m3.389s 00:03:48.989 sys 0m0.764s 00:03:48.989 07:04:23 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.989 07:04:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.989 ************************************ 00:03:48.989 END TEST rpc 00:03:48.989 ************************************ 00:03:48.989 07:04:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:48.989 07:04:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.989 07:04:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.989 07:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:49.251 ************************************ 00:03:49.251 START TEST skip_rpc 00:03:49.251 ************************************ 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.251 * Looking for test storage... 00:03:49.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.251 07:04:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.251 --rc genhtml_branch_coverage=1 00:03:49.251 --rc genhtml_function_coverage=1 00:03:49.251 --rc genhtml_legend=1 00:03:49.251 --rc geninfo_all_blocks=1 00:03:49.251 --rc geninfo_unexecuted_blocks=1 00:03:49.251 00:03:49.251 ' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.251 --rc genhtml_branch_coverage=1 00:03:49.251 --rc genhtml_function_coverage=1 00:03:49.251 --rc genhtml_legend=1 00:03:49.251 --rc geninfo_all_blocks=1 00:03:49.251 --rc geninfo_unexecuted_blocks=1 00:03:49.251 00:03:49.251 ' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.251 --rc genhtml_branch_coverage=1 00:03:49.251 --rc genhtml_function_coverage=1 00:03:49.251 --rc genhtml_legend=1 00:03:49.251 --rc geninfo_all_blocks=1 00:03:49.251 --rc geninfo_unexecuted_blocks=1 00:03:49.251 00:03:49.251 ' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.251 --rc genhtml_branch_coverage=1 00:03:49.251 --rc genhtml_function_coverage=1 00:03:49.251 --rc genhtml_legend=1 00:03:49.251 --rc geninfo_all_blocks=1 00:03:49.251 --rc geninfo_unexecuted_blocks=1 00:03:49.251 00:03:49.251 ' 00:03:49.251 07:04:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.251 07:04:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.251 07:04:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.251 07:04:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.251 ************************************ 00:03:49.251 START TEST skip_rpc 00:03:49.251 ************************************ 00:03:49.251 07:04:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:49.252 
07:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1027196 00:03:49.252 07:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.252 07:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:49.252 07:04:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.512 [2024-11-20 07:04:24.059371] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:03:49.512 [2024-11-20 07:04:24.059435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027196 ] 00:03:49.512 [2024-11-20 07:04:24.143338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.512 [2024-11-20 07:04:24.185054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1027196 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 1027196 ']' 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 1027196 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1027196 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1027196' 00:03:54.801 killing process with pid 1027196 00:03:54.801 07:04:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 1027196 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 1027196 00:03:54.801 00:03:54.801 real 0m5.284s 00:03:54.801 user 0m5.072s 00:03:54.801 sys 0m0.257s 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.801 07:04:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.801 ************************************ 00:03:54.801 END TEST skip_rpc 00:03:54.801 ************************************ 00:03:54.801 07:04:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:54.801 07:04:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.801 07:04:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.801 07:04:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.801 ************************************ 00:03:54.801 START TEST skip_rpc_with_json 00:03:54.801 ************************************ 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1028393 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1028393 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 1028393 ']' 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.801 07:04:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.801 [2024-11-20 07:04:29.421173] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
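Condensed, the skip_rpc test that finished above boils down to the following sequence (a sketch assembled from the traced rpc/skip_rpc.sh lines, paths shortened; NOT, rpc_cmd and killprocess are the autotest_common.sh helpers visible in the trace):

  spdk_tgt --no-rpc-server -m 0x1 &    # rpc/skip_rpc.sh@15: target with no RPC listener
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  sleep 5                              # @19: give the reactor time to come up
  NOT rpc_cmd spdk_get_version         # @21: the call must fail (es=1 above), no socket exists
  killprocess $spdk_pid                # @23: the "real 0m5.284s" above is this whole sequence

skip_rpc_with_json, starting here, goes the opposite way: it brings the target up with RPC enabled, creates the tcp transport, writes rpc_cmd save_config out to test/rpc/config.json (the JSON dumped below), then relaunches spdk_tgt with --no-rpc-server --json config.json and greps the relaunched target's log for the "TCP Transport Init" notice.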
00:03:54.801 [2024-11-20 07:04:29.421226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028393 ] 00:03:54.801 [2024-11-20 07:04:29.499829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.801 [2024-11-20 07:04:29.538077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.741 [2024-11-20 07:04:30.200745] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:55.741 request: 00:03:55.741 { 00:03:55.741 "trtype": "tcp", 00:03:55.741 "method": "nvmf_get_transports", 00:03:55.741 "req_id": 1 00:03:55.741 } 00:03:55.741 Got JSON-RPC error response 00:03:55.741 response: 00:03:55.741 { 00:03:55.741 "code": -19, 00:03:55.741 "message": "No such device" 00:03:55.741 } 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.741 [2024-11-20 07:04:30.212874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.741 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.741 { 00:03:55.741 "subsystems": [ 00:03:55.741 { 00:03:55.741 "subsystem": "fsdev", 00:03:55.741 "config": [ 00:03:55.741 { 00:03:55.741 "method": "fsdev_set_opts", 00:03:55.741 "params": { 00:03:55.741 "fsdev_io_pool_size": 65535, 00:03:55.741 "fsdev_io_cache_size": 256 00:03:55.741 } 00:03:55.741 } 00:03:55.741 ] 00:03:55.741 }, 00:03:55.741 { 00:03:55.741 "subsystem": "vfio_user_target", 00:03:55.741 "config": null 00:03:55.741 }, 00:03:55.741 { 00:03:55.741 "subsystem": "keyring", 00:03:55.741 "config": [] 00:03:55.741 }, 00:03:55.741 { 00:03:55.741 "subsystem": "iobuf", 00:03:55.741 "config": [ 00:03:55.741 { 00:03:55.741 "method": "iobuf_set_options", 00:03:55.741 "params": { 00:03:55.741 "small_pool_count": 8192, 00:03:55.741 "large_pool_count": 1024, 00:03:55.741 "small_bufsize": 8192, 00:03:55.741 "large_bufsize": 135168, 00:03:55.741 "enable_numa": false 00:03:55.741 } 00:03:55.741 } 
00:03:55.741 ] 00:03:55.741 }, 00:03:55.741 { 00:03:55.741 "subsystem": "sock", 00:03:55.741 "config": [ 00:03:55.741 { 00:03:55.741 "method": "sock_set_default_impl", 00:03:55.741 "params": { 00:03:55.741 "impl_name": "posix" 00:03:55.741 } 00:03:55.741 }, 00:03:55.741 { 00:03:55.741 "method": "sock_impl_set_options", 00:03:55.741 "params": { 00:03:55.741 "impl_name": "ssl", 00:03:55.741 "recv_buf_size": 4096, 00:03:55.741 "send_buf_size": 4096, 00:03:55.741 "enable_recv_pipe": true, 00:03:55.741 "enable_quickack": false, 00:03:55.741 "enable_placement_id": 0, 00:03:55.741 "enable_zerocopy_send_server": true, 00:03:55.741 "enable_zerocopy_send_client": false, 00:03:55.741 "zerocopy_threshold": 0, 00:03:55.741 "tls_version": 0, 00:03:55.741 "enable_ktls": false 00:03:55.741 } 00:03:55.741 }, 00:03:55.742 { 00:03:55.742 "method": "sock_impl_set_options", 00:03:55.742 "params": { 00:03:55.742 "impl_name": "posix", 00:03:55.742 "recv_buf_size": 2097152, 00:03:55.742 "send_buf_size": 2097152, 00:03:55.742 "enable_recv_pipe": true, 00:03:55.742 "enable_quickack": false, 00:03:55.742 "enable_placement_id": 0, 00:03:55.742 "enable_zerocopy_send_server": true, 00:03:55.742 "enable_zerocopy_send_client": false, 00:03:55.742 "zerocopy_threshold": 0, 00:03:55.742 "tls_version": 0, 00:03:55.742 "enable_ktls": false 00:03:55.742 } 00:03:55.742 } 00:03:55.742 ] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "vmd", 00:03:55.742 "config": [] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "accel", 00:03:55.742 "config": [ 00:03:55.742 { 00:03:55.742 "method": "accel_set_options", 00:03:55.742 "params": { 00:03:55.742 "small_cache_size": 128, 00:03:55.742 "large_cache_size": 16, 00:03:55.742 "task_count": 2048, 00:03:55.742 "sequence_count": 2048, 00:03:55.742 "buf_count": 2048 00:03:55.742 } 00:03:55.742 } 00:03:55.742 ] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "bdev", 00:03:55.742 "config": [ 00:03:55.742 { 00:03:55.742 "method": "bdev_set_options", 00:03:55.742 "params": { 00:03:55.742 "bdev_io_pool_size": 65535, 00:03:55.742 "bdev_io_cache_size": 256, 00:03:55.742 "bdev_auto_examine": true, 00:03:55.742 "iobuf_small_cache_size": 128, 00:03:55.742 "iobuf_large_cache_size": 16 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "bdev_raid_set_options", 00:03:55.742 "params": { 00:03:55.742 "process_window_size_kb": 1024, 00:03:55.742 "process_max_bandwidth_mb_sec": 0 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "bdev_iscsi_set_options", 00:03:55.742 "params": { 00:03:55.742 "timeout_sec": 30 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "bdev_nvme_set_options", 00:03:55.742 "params": { 00:03:55.742 "action_on_timeout": "none", 00:03:55.742 "timeout_us": 0, 00:03:55.742 "timeout_admin_us": 0, 00:03:55.742 "keep_alive_timeout_ms": 10000, 00:03:55.742 "arbitration_burst": 0, 00:03:55.742 "low_priority_weight": 0, 00:03:55.742 "medium_priority_weight": 0, 00:03:55.742 "high_priority_weight": 0, 00:03:55.742 "nvme_adminq_poll_period_us": 10000, 00:03:55.742 "nvme_ioq_poll_period_us": 0, 00:03:55.742 "io_queue_requests": 0, 00:03:55.742 "delay_cmd_submit": true, 00:03:55.742 "transport_retry_count": 4, 00:03:55.742 "bdev_retry_count": 3, 00:03:55.742 "transport_ack_timeout": 0, 00:03:55.742 "ctrlr_loss_timeout_sec": 0, 00:03:55.742 "reconnect_delay_sec": 0, 00:03:55.742 "fast_io_fail_timeout_sec": 0, 00:03:55.742 "disable_auto_failback": false, 00:03:55.742 "generate_uuids": false, 00:03:55.742 "transport_tos": 
0, 00:03:55.742 "nvme_error_stat": false, 00:03:55.742 "rdma_srq_size": 0, 00:03:55.742 "io_path_stat": false, 00:03:55.742 "allow_accel_sequence": false, 00:03:55.742 "rdma_max_cq_size": 0, 00:03:55.742 "rdma_cm_event_timeout_ms": 0, 00:03:55.742 "dhchap_digests": [ 00:03:55.742 "sha256", 00:03:55.742 "sha384", 00:03:55.742 "sha512" 00:03:55.742 ], 00:03:55.742 "dhchap_dhgroups": [ 00:03:55.742 "null", 00:03:55.742 "ffdhe2048", 00:03:55.742 "ffdhe3072", 00:03:55.742 "ffdhe4096", 00:03:55.742 "ffdhe6144", 00:03:55.742 "ffdhe8192" 00:03:55.742 ] 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "bdev_nvme_set_hotplug", 00:03:55.742 "params": { 00:03:55.742 "period_us": 100000, 00:03:55.742 "enable": false 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "bdev_wait_for_examine" 00:03:55.742 } 00:03:55.742 ] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "scsi", 00:03:55.742 "config": null 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "scheduler", 00:03:55.742 "config": [ 00:03:55.742 { 00:03:55.742 "method": "framework_set_scheduler", 00:03:55.742 "params": { 00:03:55.742 "name": "static" 00:03:55.742 } 00:03:55.742 } 00:03:55.742 ] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "vhost_scsi", 00:03:55.742 "config": [] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "vhost_blk", 00:03:55.742 "config": [] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "ublk", 00:03:55.742 "config": [] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "nbd", 00:03:55.742 "config": [] 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "subsystem": "nvmf", 00:03:55.742 "config": [ 00:03:55.742 { 00:03:55.742 "method": "nvmf_set_config", 00:03:55.742 "params": { 00:03:55.742 "discovery_filter": "match_any", 00:03:55.742 "admin_cmd_passthru": { 00:03:55.742 "identify_ctrlr": false 00:03:55.742 }, 00:03:55.742 "dhchap_digests": [ 00:03:55.742 "sha256", 00:03:55.742 "sha384", 00:03:55.742 "sha512" 00:03:55.742 ], 00:03:55.742 "dhchap_dhgroups": [ 00:03:55.742 "null", 00:03:55.742 "ffdhe2048", 00:03:55.742 "ffdhe3072", 00:03:55.742 "ffdhe4096", 00:03:55.742 "ffdhe6144", 00:03:55.742 "ffdhe8192" 00:03:55.742 ] 00:03:55.742 } 00:03:55.742 }, 00:03:55.742 { 00:03:55.742 "method": "nvmf_set_max_subsystems", 00:03:55.742 "params": { 00:03:55.742 "max_subsystems": 1024 00:03:55.742 } 00:03:55.743 }, 00:03:55.743 { 00:03:55.743 "method": "nvmf_set_crdt", 00:03:55.743 "params": { 00:03:55.743 "crdt1": 0, 00:03:55.743 "crdt2": 0, 00:03:55.743 "crdt3": 0 00:03:55.743 } 00:03:55.743 }, 00:03:55.743 { 00:03:55.743 "method": "nvmf_create_transport", 00:03:55.743 "params": { 00:03:55.743 "trtype": "TCP", 00:03:55.743 "max_queue_depth": 128, 00:03:55.743 "max_io_qpairs_per_ctrlr": 127, 00:03:55.743 "in_capsule_data_size": 4096, 00:03:55.743 "max_io_size": 131072, 00:03:55.743 "io_unit_size": 131072, 00:03:55.743 "max_aq_depth": 128, 00:03:55.743 "num_shared_buffers": 511, 00:03:55.743 "buf_cache_size": 4294967295, 00:03:55.743 "dif_insert_or_strip": false, 00:03:55.743 "zcopy": false, 00:03:55.743 "c2h_success": true, 00:03:55.743 "sock_priority": 0, 00:03:55.743 "abort_timeout_sec": 1, 00:03:55.743 "ack_timeout": 0, 00:03:55.743 "data_wr_pool_size": 0 00:03:55.743 } 00:03:55.743 } 00:03:55.743 ] 00:03:55.743 }, 00:03:55.743 { 00:03:55.743 "subsystem": "iscsi", 00:03:55.743 "config": [ 00:03:55.743 { 00:03:55.743 "method": "iscsi_set_options", 00:03:55.743 "params": { 00:03:55.743 "node_base": "iqn.2016-06.io.spdk", 00:03:55.743 "max_sessions": 
128, 00:03:55.743 "max_connections_per_session": 2, 00:03:55.743 "max_queue_depth": 64, 00:03:55.743 "default_time2wait": 2, 00:03:55.743 "default_time2retain": 20, 00:03:55.743 "first_burst_length": 8192, 00:03:55.743 "immediate_data": true, 00:03:55.743 "allow_duplicated_isid": false, 00:03:55.743 "error_recovery_level": 0, 00:03:55.743 "nop_timeout": 60, 00:03:55.743 "nop_in_interval": 30, 00:03:55.743 "disable_chap": false, 00:03:55.743 "require_chap": false, 00:03:55.743 "mutual_chap": false, 00:03:55.743 "chap_group": 0, 00:03:55.743 "max_large_datain_per_connection": 64, 00:03:55.743 "max_r2t_per_connection": 4, 00:03:55.743 "pdu_pool_size": 36864, 00:03:55.743 "immediate_data_pool_size": 16384, 00:03:55.743 "data_out_pool_size": 2048 00:03:55.743 } 00:03:55.743 } 00:03:55.743 ] 00:03:55.743 } 00:03:55.743 ] 00:03:55.743 } 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1028393 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1028393 ']' 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1028393 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1028393 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1028393' 00:03:55.743 killing process with pid 1028393 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1028393 00:03:55.743 07:04:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1028393 00:03:56.002 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1028576 00:03:56.002 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:56.002 07:04:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1028576 ']' 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 1028576' 00:04:01.282 killing process with pid 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1028576 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.282 00:04:01.282 real 0m6.582s 00:04:01.282 user 0m6.463s 00:04:01.282 sys 0m0.566s 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.282 07:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.282 ************************************ 00:04:01.282 END TEST skip_rpc_with_json 00:04:01.282 ************************************ 00:04:01.282 07:04:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:01.282 07:04:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.282 07:04:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.282 07:04:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.282 ************************************ 00:04:01.282 START TEST skip_rpc_with_delay 00:04:01.282 ************************************ 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.282 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.543 
[2024-11-20 07:04:36.095530] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:01.543 00:04:01.543 real 0m0.088s 00:04:01.543 user 0m0.057s 00:04:01.543 sys 0m0.030s 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.543 07:04:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:01.543 ************************************ 00:04:01.543 END TEST skip_rpc_with_delay 00:04:01.543 ************************************ 00:04:01.543 07:04:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:01.543 07:04:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:01.543 07:04:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:01.543 07:04:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.543 07:04:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.543 07:04:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.543 ************************************ 00:04:01.543 START TEST exit_on_failed_rpc_init 00:04:01.543 ************************************ 00:04:01.543 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1029837 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1029837 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 1029837 ']' 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:01.544 07:04:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.544 [2024-11-20 07:04:36.253966] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
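skip_rpc_with_delay, which just completed above, is a one-line negative test. Reduced to its essence (a sketch, paths shortened):

  NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # app.c:842 rejects the flag combination outright ("Cannot use '--wait-for-rpc'
  # if no RPC server is going to be started."), the process exits non-zero,
  # and the NOT wrapper turns that into a pass (es=1).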
00:04:01.544 [2024-11-20 07:04:36.254016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029837 ] 00:04:01.805 [2024-11-20 07:04:36.332298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.805 [2024-11-20 07:04:36.371061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.376 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:02.376 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:02.376 07:04:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.376 07:04:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:02.377 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:02.377 [2024-11-20 07:04:37.116050] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:02.377 [2024-11-20 07:04:37.116102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029970 ] 00:04:02.637 [2024-11-20 07:04:37.208127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.637 [2024-11-20 07:04:37.244065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.637 [2024-11-20 07:04:37.244113] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
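The exit_on_failed_rpc_init failure above is the point of the test: two targets contend for the same default RPC socket. A sketch of the collision as traced from rpc/skip_rpc.sh@61-67:

  spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
  waitforlisten $spdk_pid
  NOT spdk_tgt -m 0x2      # second instance: rpc.c:180 reports the socket in use,
                           # spdk_rpc_initialize fails, spdk_app_stop exits non-zero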
00:04:02.637 [2024-11-20 07:04:37.244123] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:02.637 [2024-11-20 07:04:37.244130] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:02.637 07:04:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1029837 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 1029837 ']' 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 1029837 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1029837 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1029837' 00:04:02.638 killing process with pid 1029837 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 1029837 00:04:02.638 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 1029837 00:04:02.929 00:04:02.929 real 0m1.362s 00:04:02.929 user 0m1.583s 00:04:02.929 sys 0m0.402s 00:04:02.929 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.929 07:04:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.929 ************************************ 00:04:02.929 END TEST exit_on_failed_rpc_init 00:04:02.929 ************************************ 00:04:02.929 07:04:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.929 00:04:02.929 real 0m13.842s 00:04:02.929 user 0m13.412s 00:04:02.929 sys 0m1.574s 00:04:02.929 07:04:37 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.929 07:04:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.929 ************************************ 00:04:02.929 END TEST skip_rpc 00:04:02.929 ************************************ 00:04:02.929 07:04:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:02.929 07:04:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.929 07:04:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.929 07:04:37 -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.929 ************************************ 00:04:02.929 START TEST rpc_client 00:04:02.929 ************************************ 00:04:02.929 07:04:37 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:03.191 * Looking for test storage... 00:04:03.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.191 07:04:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.191 --rc genhtml_branch_coverage=1 00:04:03.191 --rc genhtml_function_coverage=1 00:04:03.191 --rc genhtml_legend=1 00:04:03.191 --rc geninfo_all_blocks=1 00:04:03.191 --rc geninfo_unexecuted_blocks=1 00:04:03.191 00:04:03.191 ' 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.191 --rc genhtml_branch_coverage=1 00:04:03.191 --rc genhtml_function_coverage=1 00:04:03.191 --rc genhtml_legend=1 00:04:03.191 --rc geninfo_all_blocks=1 00:04:03.191 --rc geninfo_unexecuted_blocks=1 00:04:03.191 00:04:03.191 ' 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.191 --rc genhtml_branch_coverage=1 00:04:03.191 --rc genhtml_function_coverage=1 00:04:03.191 --rc genhtml_legend=1 00:04:03.191 --rc geninfo_all_blocks=1 00:04:03.191 --rc geninfo_unexecuted_blocks=1 00:04:03.191 00:04:03.191 ' 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.191 --rc genhtml_branch_coverage=1 00:04:03.191 --rc genhtml_function_coverage=1 00:04:03.191 --rc genhtml_legend=1 00:04:03.191 --rc geninfo_all_blocks=1 00:04:03.191 --rc geninfo_unexecuted_blocks=1 00:04:03.191 00:04:03.191 ' 00:04:03.191 07:04:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:03.191 OK 00:04:03.191 07:04:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:03.191 00:04:03.191 real 0m0.233s 00:04:03.191 user 0m0.129s 00:04:03.191 sys 0m0.118s 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.191 07:04:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:03.191 ************************************ 00:04:03.191 END TEST rpc_client 00:04:03.191 ************************************ 00:04:03.191 07:04:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
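The scripts/common.sh churn above is only a version gate for lcov coverage flags, and the same gate re-runs verbatim at the start of json_config below. The comparison walks the dotted version fields left to right; a sketch of the call as traced (helper behavior inferred from the trace, so treat the signature as an assumption):

  lt 1.15 2    # expands to: cmp_versions 1.15 '<' 2, fields (1,15) vs (2)
  # first field: 1 < 2, so lt returns 0 and the pre-2.x options
  # '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' land in LCOV_OPTS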
00:04:03.191 07:04:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.191 07:04:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.191 07:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.453 ************************************ 00:04:03.453 START TEST json_config 00:04:03.453 ************************************ 00:04:03.453 07:04:37 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.453 07:04:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.453 07:04:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.453 07:04:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.453 07:04:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.453 07:04:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.453 07:04:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:03.453 07:04:38 json_config -- scripts/common.sh@345 -- # : 1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.453 07:04:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.453 07:04:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@353 -- # local d=1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.453 07:04:38 json_config -- scripts/common.sh@355 -- # echo 1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.453 07:04:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@353 -- # local d=2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.453 07:04:38 json_config -- scripts/common.sh@355 -- # echo 2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.453 07:04:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.453 07:04:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.453 07:04:38 json_config -- scripts/common.sh@368 -- # return 0 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.453 --rc genhtml_branch_coverage=1 00:04:03.453 --rc genhtml_function_coverage=1 00:04:03.453 --rc genhtml_legend=1 00:04:03.453 --rc geninfo_all_blocks=1 00:04:03.453 --rc geninfo_unexecuted_blocks=1 00:04:03.453 00:04:03.453 ' 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.453 --rc genhtml_branch_coverage=1 00:04:03.453 --rc genhtml_function_coverage=1 00:04:03.453 --rc genhtml_legend=1 00:04:03.453 --rc geninfo_all_blocks=1 00:04:03.453 --rc geninfo_unexecuted_blocks=1 00:04:03.453 00:04:03.453 ' 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.453 --rc genhtml_branch_coverage=1 00:04:03.453 --rc genhtml_function_coverage=1 00:04:03.453 --rc genhtml_legend=1 00:04:03.453 --rc geninfo_all_blocks=1 00:04:03.453 --rc geninfo_unexecuted_blocks=1 00:04:03.453 00:04:03.453 ' 00:04:03.453 07:04:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.453 --rc genhtml_branch_coverage=1 00:04:03.453 --rc genhtml_function_coverage=1 00:04:03.453 --rc genhtml_legend=1 00:04:03.453 --rc geninfo_all_blocks=1 00:04:03.453 --rc geninfo_unexecuted_blocks=1 00:04:03.453 00:04:03.453 ' 00:04:03.453 07:04:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:03.453 07:04:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.453 07:04:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:03.453 07:04:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.453 07:04:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.453 07:04:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.453 07:04:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.453 07:04:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.453 07:04:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.453 07:04:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.454 07:04:38 json_config -- paths/export.sh@5 -- # export PATH 00:04:03.454 07:04:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@51 -- # : 0 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:03.454 07:04:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.454 07:04:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:03.454 INFO: JSON configuration test init 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.454 07:04:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:03.454 07:04:38 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:03.454 07:04:38 json_config -- json_config/common.sh@10 -- # shift 00:04:03.454 07:04:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.454 07:04:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.454 07:04:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.454 07:04:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.454 07:04:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.454 07:04:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1030432 00:04:03.454 07:04:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.454 Waiting for target to run... 00:04:03.454 07:04:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1030432 /var/tmp/spdk_tgt.sock 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 1030432 ']' 00:04:03.454 07:04:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.454 07:04:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.714 [2024-11-20 07:04:38.282966] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
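From here json_config drives a freshly started target entirely over a private RPC socket. The startup traced above, condensed (a sketch; tgt_rpc is the json_config/common.sh wrapper shown later in the trace):

  spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &  # common.sh@21
  waitforlisten $app_pid /var/tmp/spdk_tgt.sock
  # every "tgt_rpc <method>" below expands to:
  #   scripts/rpc.py -s /var/tmp/spdk_tgt.sock <method>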
00:04:03.714 [2024-11-20 07:04:38.283035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030432 ] 00:04:03.974 [2024-11-20 07:04:38.573251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.974 [2024-11-20 07:04:38.602795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:04.544 07:04:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:04.544 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.544 07:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:04.544 07:04:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:04.544 07:04:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:05.115 07:04:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.115 07:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:05.115 07:04:39 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:05.115 07:04:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:05.376 07:04:39 json_config -- 
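json_config.sh@280-281 above feeds the generated NVMe bdev configuration straight into the waiting target. The two traced commands amount to the following (the pipe between them is implied by the script, not shown literally in the trace):

  scripts/gen_nvme.sh --json-with-subsystems \
    | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config

Right after, tgt_check_notification_types (next chunk) diffs notify_get_types against the expected bdev_register/bdev_unregister/fsdev_register/fsdev_unregister set using tr, sort and uniq -u; an empty diff means return 0.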
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@54 -- # sort 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:05.376 07:04:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.376 07:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:05.376 07:04:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.376 07:04:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:05.376 07:04:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:05.376 07:04:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:05.376 MallocForNvmf0 00:04:05.376 07:04:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:05.376 07:04:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:05.636 MallocForNvmf1 00:04:05.636 07:04:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:05.636 07:04:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:05.896 [2024-11-20 07:04:40.462817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.896 07:04:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:05.896 07:04:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:06.157 07:04:40 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:06.157 07:04:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:06.157 07:04:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:06.157 07:04:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:06.418 07:04:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:06.418 07:04:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:06.418 [2024-11-20 07:04:41.177173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:06.679 07:04:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:06.679 07:04:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.679 07:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.679 07:04:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:06.679 07:04:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.679 07:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.679 07:04:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:06.679 07:04:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:06.679 07:04:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:06.679 MallocBdevForConfigChangeCheck 00:04:06.939 07:04:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:06.939 07:04:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.939 07:04:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.939 07:04:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:06.939 07:04:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.200 07:04:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:07.200 INFO: shutting down applications... 
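
The create_nvmf_subsystem_config step traced above assembles the whole NVMe-oF target state through rpc.py calls on the target's UNIX-domain socket. A condensed, hand-runnable sketch of the same sequence follows; every command and argument is taken verbatim from the trace, with the long workspace paths shortened to relative ones (so it assumes you run from an SPDK checkout with a target already listening on /var/tmp/spdk_tgt.sock):

```bash
# Sketch of the RPC sequence issued above (values verbatim from the trace).
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

The two malloc bdevs become the two namespaces of cnode1, and the listener matches the NVMF_FIRST_TARGET_IP=127.0.0.1 / port 4420 values set earlier in the trace.
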
00:04:07.200 07:04:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:07.200 07:04:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:07.200 07:04:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:07.200 07:04:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:07.769 Calling clear_iscsi_subsystem 00:04:07.769 Calling clear_nvmf_subsystem 00:04:07.769 Calling clear_nbd_subsystem 00:04:07.769 Calling clear_ublk_subsystem 00:04:07.769 Calling clear_vhost_blk_subsystem 00:04:07.769 Calling clear_vhost_scsi_subsystem 00:04:07.769 Calling clear_bdev_subsystem 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:07.769 07:04:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:08.030 07:04:42 json_config -- json_config/json_config.sh@352 -- # break 00:04:08.030 07:04:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:08.030 07:04:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:08.030 07:04:42 json_config -- json_config/common.sh@31 -- # local app=target 00:04:08.030 07:04:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:08.030 07:04:42 json_config -- json_config/common.sh@35 -- # [[ -n 1030432 ]] 00:04:08.030 07:04:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1030432 00:04:08.030 07:04:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:08.030 07:04:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:08.030 07:04:42 json_config -- json_config/common.sh@41 -- # kill -0 1030432 00:04:08.030 07:04:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:08.601 07:04:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:08.601 07:04:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:08.601 07:04:43 json_config -- json_config/common.sh@41 -- # kill -0 1030432 00:04:08.601 07:04:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:08.601 07:04:43 json_config -- json_config/common.sh@43 -- # break 00:04:08.601 07:04:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:08.601 07:04:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:08.601 SPDK target shutdown done 00:04:08.601 07:04:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:08.601 INFO: relaunching applications... 
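
json_config_test_shutdown_app, traced above, stops the target cooperatively: send SIGINT, then poll with `kill -0` (which only tests that the PID still exists, sending no signal) for up to 30 half-second intervals before declaring failure. A sketch of that loop, reconstructed from the xtrace rather than copied from json_config/common.sh:

```bash
# Cooperative shutdown as traced above (reconstruction, not verbatim source).
app_pid=1030432            # target PID from this log; substitute your own
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # kill -0: existence check only
    sleep 0.5
done
echo 'SPDK target shutdown done'
```
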
00:04:08.601 07:04:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.601 07:04:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:08.601 07:04:43 json_config -- json_config/common.sh@10 -- # shift 00:04:08.601 07:04:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:08.601 07:04:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:08.601 07:04:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:08.601 07:04:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.601 07:04:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.601 07:04:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1031511 00:04:08.601 07:04:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:08.601 Waiting for target to run... 00:04:08.601 07:04:43 json_config -- json_config/common.sh@25 -- # waitforlisten 1031511 /var/tmp/spdk_tgt.sock 00:04:08.601 07:04:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@833 -- # '[' -z 1031511 ']' 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:08.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:08.601 07:04:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.601 [2024-11-20 07:04:43.172202] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:08.601 [2024-11-20 07:04:43.172275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031511 ] 00:04:08.862 [2024-11-20 07:04:43.480786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.862 [2024-11-20 07:04:43.510418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.434 [2024-11-20 07:04:44.035757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.434 [2024-11-20 07:04:44.068133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:09.434 07:04:44 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:09.434 07:04:44 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:09.434 07:04:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:09.434 00:04:09.434 07:04:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:09.434 07:04:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:09.434 INFO: Checking if target configuration is the same... 
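
The relaunch above starts spdk_tgt directly from the JSON config saved a moment earlier, then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, with the helper approximated by a poll loop (the real waitforlisten lives in test/common/autotest_common.sh; rpc_get_methods is used here as a plausible liveness probe, and paths are shortened to relative ones):

```bash
# Relaunch-from-saved-config pattern as traced above (sketch).
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json spdk_tgt_config.json &
app_pid=$!

# Poll until the target's RPC socket answers, or the process dies first.
until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$app_pid" || { echo 'target died during startup' >&2; exit 1; }
    sleep 0.1
done
```
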
00:04:09.434 07:04:44 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.434 07:04:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:09.434 07:04:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.434 + '[' 2 -ne 2 ']' 00:04:09.434 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:09.434 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:09.434 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.434 +++ basename /dev/fd/62 00:04:09.434 ++ mktemp /tmp/62.XXX 00:04:09.434 + tmp_file_1=/tmp/62.o7s 00:04:09.434 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.434 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:09.434 + tmp_file_2=/tmp/spdk_tgt_config.json.EkT 00:04:09.434 + ret=0 00:04:09.434 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.695 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.956 + diff -u /tmp/62.o7s /tmp/spdk_tgt_config.json.EkT 00:04:09.956 + echo 'INFO: JSON config files are the same' 00:04:09.956 INFO: JSON config files are the same 00:04:09.956 + rm /tmp/62.o7s /tmp/spdk_tgt_config.json.EkT 00:04:09.956 + exit 0 00:04:09.956 07:04:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:09.956 07:04:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:09.956 INFO: changing configuration and checking if this can be detected... 00:04:09.956 07:04:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:09.956 07:04:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:09.956 07:04:44 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.956 07:04:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:09.956 07:04:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.956 + '[' 2 -ne 2 ']' 00:04:09.956 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:09.956 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
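
Both comparisons here (the identical-config check just completed and the changed-config check that continues below, after MallocBdevForConfigChangeCheck is deleted) follow the same json_diff.sh recipe: dump the live config with save_config, normalize each side with config_filter.py -method sort, and diff the results. A condensed sketch, assuming config_filter.py filters stdin to stdout as the pipelines in the trace suggest, with /tmp/live.json and /tmp/file.json standing in for the mktemp names:

```bash
# Normalize-then-diff recipe from test/json_config/json_diff.sh (condensed).
SORT="test/json_config/config_filter.py -method sort"

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $SORT > /tmp/live.json
$SORT < spdk_tgt_config.json > /tmp/file.json

if diff -u /tmp/live.json /tmp/file.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
```
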
00:04:09.956 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.956 +++ basename /dev/fd/62 00:04:09.956 ++ mktemp /tmp/62.XXX 00:04:09.956 + tmp_file_1=/tmp/62.CkC 00:04:09.956 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.956 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:09.956 + tmp_file_2=/tmp/spdk_tgt_config.json.PUw 00:04:09.956 + ret=0 00:04:09.956 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:10.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:10.528 + diff -u /tmp/62.CkC /tmp/spdk_tgt_config.json.PUw 00:04:10.528 + ret=1 00:04:10.528 + echo '=== Start of file: /tmp/62.CkC ===' 00:04:10.528 + cat /tmp/62.CkC 00:04:10.528 + echo '=== End of file: /tmp/62.CkC ===' 00:04:10.528 + echo '' 00:04:10.528 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PUw ===' 00:04:10.528 + cat /tmp/spdk_tgt_config.json.PUw 00:04:10.528 + echo '=== End of file: /tmp/spdk_tgt_config.json.PUw ===' 00:04:10.528 + echo '' 00:04:10.528 + rm /tmp/62.CkC /tmp/spdk_tgt_config.json.PUw 00:04:10.528 + exit 1 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:10.528 INFO: configuration change detected. 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 1031511 ]] 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.528 07:04:45 json_config -- json_config/json_config.sh@330 -- # killprocess 1031511 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@952 -- # '[' -z 1031511 ']' 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@956 -- # kill -0 1031511 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@957 -- # uname 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.528 07:04:45 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1031511 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1031511' 00:04:10.528 killing process with pid 1031511 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@971 -- # kill 1031511 00:04:10.528 07:04:45 json_config -- common/autotest_common.sh@976 -- # wait 1031511 00:04:10.790 07:04:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.790 07:04:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:10.790 07:04:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.790 07:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.790 07:04:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:10.790 07:04:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:10.790 INFO: Success 00:04:10.790 00:04:10.790 real 0m7.527s 00:04:10.790 user 0m9.101s 00:04:10.790 sys 0m2.026s 00:04:10.790 07:04:45 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.790 07:04:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.790 ************************************ 00:04:10.790 END TEST json_config 00:04:10.790 ************************************ 00:04:10.790 07:04:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.790 07:04:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.790 07:04:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.790 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:04:11.051 ************************************ 00:04:11.051 START TEST json_config_extra_key 00:04:11.051 ************************************ 00:04:11.051 07:04:45 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:11.051 07:04:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.051 07:04:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.051 07:04:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.051 07:04:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.051 07:04:45 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:11.051 07:04:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.052 --rc genhtml_branch_coverage=1 00:04:11.052 --rc genhtml_function_coverage=1 00:04:11.052 --rc genhtml_legend=1 00:04:11.052 --rc geninfo_all_blocks=1 00:04:11.052 --rc geninfo_unexecuted_blocks=1 00:04:11.052 00:04:11.052 ' 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.052 --rc genhtml_branch_coverage=1 00:04:11.052 --rc genhtml_function_coverage=1 00:04:11.052 --rc genhtml_legend=1 00:04:11.052 --rc geninfo_all_blocks=1 00:04:11.052 --rc geninfo_unexecuted_blocks=1 00:04:11.052 00:04:11.052 ' 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.052 --rc genhtml_branch_coverage=1 00:04:11.052 --rc genhtml_function_coverage=1 00:04:11.052 --rc genhtml_legend=1 00:04:11.052 --rc geninfo_all_blocks=1 00:04:11.052 --rc geninfo_unexecuted_blocks=1 00:04:11.052 00:04:11.052 ' 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.052 --rc genhtml_branch_coverage=1 00:04:11.052 --rc genhtml_function_coverage=1 00:04:11.052 --rc genhtml_legend=1 00:04:11.052 --rc geninfo_all_blocks=1 00:04:11.052 --rc geninfo_unexecuted_blocks=1 00:04:11.052 00:04:11.052 ' 00:04:11.052 07:04:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.052 07:04:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.052 07:04:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.052 07:04:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.052 07:04:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.052 07:04:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:11.052 07:04:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.052 07:04:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:11.052 INFO: launching applications... 
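
One genuine defect is captured mid-source above: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and applying `-eq` to an empty string makes `[` print `integer expression expected`. The error is harmless here (the test simply evaluates false and the script continues), but the usual defensive pattern is to default the variable before any numeric comparison, as in this sketch (an illustration with a hypothetical variable name, not the upstream fix):

```bash
# '[ "$flag" -eq 1 ]' errors out when flag is empty or unset; default it first.
flag=${flag:-0}
if [ "$flag" -eq 1 ]; then
    echo 'feature enabled'
fi
```
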
00:04:11.052 07:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1032038 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.052 Waiting for target to run... 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1032038 /var/tmp/spdk_tgt.sock 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 1032038 ']' 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.052 07:04:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.052 07:04:45 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.053 07:04:45 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.053 07:04:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.313 [2024-11-20 07:04:45.851602] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:11.314 [2024-11-20 07:04:45.851679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032038 ] 00:04:11.574 [2024-11-20 07:04:46.147314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.574 [2024-11-20 07:04:46.177022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.230 07:04:46 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.230 07:04:46 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:12.230 00:04:12.230 07:04:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:12.230 INFO: shutting down applications... 
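
The extra_key harness above keys everything off bash associative arrays, one entry per app, so the same start/stop helpers can manage multiple targets. A minimal sketch of that bookkeeping, reduced from the declarations visible in the trace (paths shortened; the launch wiring is inferred from the spdk_tgt command line logged above, not copied from common.sh):

```bash
# Per-app bookkeeping pattern from test/json_config/common.sh (reduced sketch).
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='test/json_config/extra_key.json')

app=target
# app_params is left unquoted on purpose so "-m 0x1 -s 1024" splits into words.
build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
    --json "${configs_path[$app]}" &
app_pid[$app]=$!
echo "Waiting for $app to run..."
```
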
00:04:12.230 07:04:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1032038 ]] 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1032038 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1032038 00:04:12.230 07:04:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1032038 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:12.541 07:04:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:12.541 SPDK target shutdown done 00:04:12.541 07:04:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:12.541 Success 00:04:12.541 00:04:12.541 real 0m1.562s 00:04:12.541 user 0m1.192s 00:04:12.541 sys 0m0.409s 00:04:12.541 07:04:47 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.541 07:04:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.541 ************************************ 00:04:12.541 END TEST json_config_extra_key 00:04:12.541 ************************************ 00:04:12.541 07:04:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.541 07:04:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.541 07:04:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.541 07:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:12.541 ************************************ 00:04:12.541 START TEST alias_rpc 00:04:12.541 ************************************ 00:04:12.541 07:04:47 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:12.541 * Looking for test storage... 
00:04:12.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:12.802 07:04:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:12.802 07:04:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:12.802 07:04:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:12.802 07:04:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.802 07:04:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.803 07:04:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:12.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.803 --rc genhtml_branch_coverage=1 00:04:12.803 --rc genhtml_function_coverage=1 00:04:12.803 --rc genhtml_legend=1 00:04:12.803 --rc geninfo_all_blocks=1 00:04:12.803 --rc geninfo_unexecuted_blocks=1 00:04:12.803 00:04:12.803 ' 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:12.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.803 --rc genhtml_branch_coverage=1 00:04:12.803 --rc genhtml_function_coverage=1 00:04:12.803 --rc genhtml_legend=1 00:04:12.803 --rc geninfo_all_blocks=1 00:04:12.803 --rc geninfo_unexecuted_blocks=1 00:04:12.803 00:04:12.803 ' 00:04:12.803 07:04:47 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:12.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.803 --rc genhtml_branch_coverage=1 00:04:12.803 --rc genhtml_function_coverage=1 00:04:12.803 --rc genhtml_legend=1 00:04:12.803 --rc geninfo_all_blocks=1 00:04:12.803 --rc geninfo_unexecuted_blocks=1 00:04:12.803 00:04:12.803 ' 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:12.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.803 --rc genhtml_branch_coverage=1 00:04:12.803 --rc genhtml_function_coverage=1 00:04:12.803 --rc genhtml_legend=1 00:04:12.803 --rc geninfo_all_blocks=1 00:04:12.803 --rc geninfo_unexecuted_blocks=1 00:04:12.803 00:04:12.803 ' 00:04:12.803 07:04:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.803 07:04:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1032438 00:04:12.803 07:04:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1032438 00:04:12.803 07:04:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 1032438 ']' 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:12.803 07:04:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.803 [2024-11-20 07:04:47.467642] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:12.803 [2024-11-20 07:04:47.467693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032438 ] 00:04:12.803 [2024-11-20 07:04:47.546701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.064 [2024-11-20 07:04:47.582592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.635 07:04:48 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:13.635 07:04:48 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:13.635 07:04:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:13.896 07:04:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1032438 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 1032438 ']' 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 1032438 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032438 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032438' 00:04:13.896 killing process with pid 1032438 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@971 -- # kill 1032438 00:04:13.896 07:04:48 alias_rpc -- common/autotest_common.sh@976 -- # wait 1032438 00:04:14.157 00:04:14.157 real 0m1.518s 00:04:14.157 user 0m1.689s 00:04:14.157 sys 0m0.402s 00:04:14.157 07:04:48 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.157 07:04:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.157 ************************************ 00:04:14.157 END TEST alias_rpc 00:04:14.157 ************************************ 00:04:14.157 07:04:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:14.157 07:04:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.157 07:04:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.157 07:04:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.157 07:04:48 -- common/autotest_common.sh@10 -- # set +x 00:04:14.157 ************************************ 00:04:14.157 START TEST spdkcli_tcp 00:04:14.157 ************************************ 00:04:14.157 07:04:48 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:14.157 * Looking for test storage... 
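
Teardown in alias_rpc goes through the killprocess helper traced above: confirm the PID still exists with `kill -0`, resolve its command name via `ps` (here `reactor_0`, the SPDK reactor thread), compare it against `sudo`, then kill and wait. A condensed sketch of that flow, reconstructed from the xtrace rather than the verbatim autotest_common.sh source:

```bash
# killprocess flow as traced above (condensed sketch, not verbatim source).
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # already gone?
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0
    # The real helper special-cases name == sudo; that branch is omitted here.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
```
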
00:04:14.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:14.157 07:04:48 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.157 07:04:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.157 07:04:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.418 07:04:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:14.418 07:04:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.418 07:04:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:14.418 07:04:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.418 07:04:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.418 07:04:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.418 07:04:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.418 --rc genhtml_branch_coverage=1 00:04:14.418 --rc genhtml_function_coverage=1 00:04:14.418 --rc genhtml_legend=1 00:04:14.418 --rc geninfo_all_blocks=1 00:04:14.418 --rc geninfo_unexecuted_blocks=1 00:04:14.418 00:04:14.418 ' 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.418 --rc genhtml_branch_coverage=1 00:04:14.418 --rc genhtml_function_coverage=1 00:04:14.418 --rc genhtml_legend=1 00:04:14.418 --rc geninfo_all_blocks=1 00:04:14.418 --rc 
geninfo_unexecuted_blocks=1 00:04:14.418 00:04:14.418 ' 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.418 --rc genhtml_branch_coverage=1 00:04:14.418 --rc genhtml_function_coverage=1 00:04:14.418 --rc genhtml_legend=1 00:04:14.418 --rc geninfo_all_blocks=1 00:04:14.418 --rc geninfo_unexecuted_blocks=1 00:04:14.418 00:04:14.418 ' 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.418 --rc genhtml_branch_coverage=1 00:04:14.418 --rc genhtml_function_coverage=1 00:04:14.418 --rc genhtml_legend=1 00:04:14.418 --rc geninfo_all_blocks=1 00:04:14.418 --rc geninfo_unexecuted_blocks=1 00:04:14.418 00:04:14.418 ' 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1032829 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1032829 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1032829 ']' 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:14.418 07:04:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.418 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:14.418 [2024-11-20 07:04:49.070349] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
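
The spdkcli_tcp run that this launch begins exercises RPC over TCP instead of the UNIX socket: as the continuation below shows, socat listens on port 9998 and forwards to /var/tmp/spdk.sock, and rpc.py talks to the TCP side with connection retries (-r 100) and a per-attempt timeout (-t 2), which is how the long rpc_get_methods listing below is produced. A sketch of that bridge, taken from the socat and rpc.py command lines in the trace (cleanup of socat_pid normally happens in the suite's err_cleanup trap, not shown):

```bash
# TCP bridge used by spdkcli_tcp below (sketch).
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Same RPC surface, now over TCP: retry up to 100 times, 2 s timeout.
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
```
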
00:04:14.418 [2024-11-20 07:04:49.070423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032829 ] 00:04:14.418 [2024-11-20 07:04:49.154271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.678 [2024-11-20 07:04:49.197427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.678 [2024-11-20 07:04:49.197431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.248 07:04:49 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.248 07:04:49 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:15.248 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1033106 00:04:15.248 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:15.248 07:04:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:15.515 [ 00:04:15.515 "bdev_malloc_delete", 00:04:15.515 "bdev_malloc_create", 00:04:15.515 "bdev_null_resize", 00:04:15.515 "bdev_null_delete", 00:04:15.515 "bdev_null_create", 00:04:15.515 "bdev_nvme_cuse_unregister", 00:04:15.515 "bdev_nvme_cuse_register", 00:04:15.515 "bdev_opal_new_user", 00:04:15.515 "bdev_opal_set_lock_state", 00:04:15.515 "bdev_opal_delete", 00:04:15.515 "bdev_opal_get_info", 00:04:15.515 "bdev_opal_create", 00:04:15.515 "bdev_nvme_opal_revert", 00:04:15.515 "bdev_nvme_opal_init", 00:04:15.515 "bdev_nvme_send_cmd", 00:04:15.515 "bdev_nvme_set_keys", 00:04:15.515 "bdev_nvme_get_path_iostat", 00:04:15.515 "bdev_nvme_get_mdns_discovery_info", 00:04:15.515 "bdev_nvme_stop_mdns_discovery", 00:04:15.515 "bdev_nvme_start_mdns_discovery", 00:04:15.515 "bdev_nvme_set_multipath_policy", 00:04:15.515 "bdev_nvme_set_preferred_path", 00:04:15.515 "bdev_nvme_get_io_paths", 00:04:15.515 "bdev_nvme_remove_error_injection", 00:04:15.515 "bdev_nvme_add_error_injection", 00:04:15.515 "bdev_nvme_get_discovery_info", 00:04:15.515 "bdev_nvme_stop_discovery", 00:04:15.515 "bdev_nvme_start_discovery", 00:04:15.515 "bdev_nvme_get_controller_health_info", 00:04:15.515 "bdev_nvme_disable_controller", 00:04:15.515 "bdev_nvme_enable_controller", 00:04:15.515 "bdev_nvme_reset_controller", 00:04:15.515 "bdev_nvme_get_transport_statistics", 00:04:15.515 "bdev_nvme_apply_firmware", 00:04:15.515 "bdev_nvme_detach_controller", 00:04:15.515 "bdev_nvme_get_controllers", 00:04:15.515 "bdev_nvme_attach_controller", 00:04:15.515 "bdev_nvme_set_hotplug", 00:04:15.515 "bdev_nvme_set_options", 00:04:15.515 "bdev_passthru_delete", 00:04:15.515 "bdev_passthru_create", 00:04:15.515 "bdev_lvol_set_parent_bdev", 00:04:15.515 "bdev_lvol_set_parent", 00:04:15.515 "bdev_lvol_check_shallow_copy", 00:04:15.515 "bdev_lvol_start_shallow_copy", 00:04:15.515 "bdev_lvol_grow_lvstore", 00:04:15.515 "bdev_lvol_get_lvols", 00:04:15.515 "bdev_lvol_get_lvstores", 00:04:15.515 "bdev_lvol_delete", 00:04:15.515 "bdev_lvol_set_read_only", 00:04:15.515 "bdev_lvol_resize", 00:04:15.515 "bdev_lvol_decouple_parent", 00:04:15.515 "bdev_lvol_inflate", 00:04:15.515 "bdev_lvol_rename", 00:04:15.515 "bdev_lvol_clone_bdev", 00:04:15.515 "bdev_lvol_clone", 00:04:15.515 "bdev_lvol_snapshot", 00:04:15.515 "bdev_lvol_create", 00:04:15.515 "bdev_lvol_delete_lvstore", 00:04:15.515 "bdev_lvol_rename_lvstore", 
00:04:15.515 "bdev_lvol_create_lvstore", 00:04:15.515 "bdev_raid_set_options", 00:04:15.515 "bdev_raid_remove_base_bdev", 00:04:15.515 "bdev_raid_add_base_bdev", 00:04:15.515 "bdev_raid_delete", 00:04:15.515 "bdev_raid_create", 00:04:15.515 "bdev_raid_get_bdevs", 00:04:15.515 "bdev_error_inject_error", 00:04:15.515 "bdev_error_delete", 00:04:15.515 "bdev_error_create", 00:04:15.515 "bdev_split_delete", 00:04:15.515 "bdev_split_create", 00:04:15.515 "bdev_delay_delete", 00:04:15.515 "bdev_delay_create", 00:04:15.515 "bdev_delay_update_latency", 00:04:15.515 "bdev_zone_block_delete", 00:04:15.515 "bdev_zone_block_create", 00:04:15.515 "blobfs_create", 00:04:15.515 "blobfs_detect", 00:04:15.515 "blobfs_set_cache_size", 00:04:15.515 "bdev_aio_delete", 00:04:15.515 "bdev_aio_rescan", 00:04:15.515 "bdev_aio_create", 00:04:15.515 "bdev_ftl_set_property", 00:04:15.515 "bdev_ftl_get_properties", 00:04:15.515 "bdev_ftl_get_stats", 00:04:15.515 "bdev_ftl_unmap", 00:04:15.515 "bdev_ftl_unload", 00:04:15.515 "bdev_ftl_delete", 00:04:15.515 "bdev_ftl_load", 00:04:15.515 "bdev_ftl_create", 00:04:15.515 "bdev_virtio_attach_controller", 00:04:15.515 "bdev_virtio_scsi_get_devices", 00:04:15.515 "bdev_virtio_detach_controller", 00:04:15.515 "bdev_virtio_blk_set_hotplug", 00:04:15.515 "bdev_iscsi_delete", 00:04:15.515 "bdev_iscsi_create", 00:04:15.515 "bdev_iscsi_set_options", 00:04:15.515 "accel_error_inject_error", 00:04:15.515 "ioat_scan_accel_module", 00:04:15.515 "dsa_scan_accel_module", 00:04:15.515 "iaa_scan_accel_module", 00:04:15.515 "vfu_virtio_create_fs_endpoint", 00:04:15.515 "vfu_virtio_create_scsi_endpoint", 00:04:15.515 "vfu_virtio_scsi_remove_target", 00:04:15.515 "vfu_virtio_scsi_add_target", 00:04:15.515 "vfu_virtio_create_blk_endpoint", 00:04:15.515 "vfu_virtio_delete_endpoint", 00:04:15.516 "keyring_file_remove_key", 00:04:15.516 "keyring_file_add_key", 00:04:15.516 "keyring_linux_set_options", 00:04:15.516 "fsdev_aio_delete", 00:04:15.516 "fsdev_aio_create", 00:04:15.516 "iscsi_get_histogram", 00:04:15.516 "iscsi_enable_histogram", 00:04:15.516 "iscsi_set_options", 00:04:15.516 "iscsi_get_auth_groups", 00:04:15.516 "iscsi_auth_group_remove_secret", 00:04:15.516 "iscsi_auth_group_add_secret", 00:04:15.516 "iscsi_delete_auth_group", 00:04:15.516 "iscsi_create_auth_group", 00:04:15.516 "iscsi_set_discovery_auth", 00:04:15.516 "iscsi_get_options", 00:04:15.516 "iscsi_target_node_request_logout", 00:04:15.516 "iscsi_target_node_set_redirect", 00:04:15.516 "iscsi_target_node_set_auth", 00:04:15.516 "iscsi_target_node_add_lun", 00:04:15.516 "iscsi_get_stats", 00:04:15.516 "iscsi_get_connections", 00:04:15.516 "iscsi_portal_group_set_auth", 00:04:15.516 "iscsi_start_portal_group", 00:04:15.516 "iscsi_delete_portal_group", 00:04:15.516 "iscsi_create_portal_group", 00:04:15.516 "iscsi_get_portal_groups", 00:04:15.516 "iscsi_delete_target_node", 00:04:15.516 "iscsi_target_node_remove_pg_ig_maps", 00:04:15.516 "iscsi_target_node_add_pg_ig_maps", 00:04:15.516 "iscsi_create_target_node", 00:04:15.516 "iscsi_get_target_nodes", 00:04:15.516 "iscsi_delete_initiator_group", 00:04:15.516 "iscsi_initiator_group_remove_initiators", 00:04:15.516 "iscsi_initiator_group_add_initiators", 00:04:15.516 "iscsi_create_initiator_group", 00:04:15.516 "iscsi_get_initiator_groups", 00:04:15.516 "nvmf_set_crdt", 00:04:15.516 "nvmf_set_config", 00:04:15.516 "nvmf_set_max_subsystems", 00:04:15.516 "nvmf_stop_mdns_prr", 00:04:15.516 "nvmf_publish_mdns_prr", 00:04:15.516 "nvmf_subsystem_get_listeners", 00:04:15.516 
"nvmf_subsystem_get_qpairs", 00:04:15.516 "nvmf_subsystem_get_controllers", 00:04:15.516 "nvmf_get_stats", 00:04:15.516 "nvmf_get_transports", 00:04:15.516 "nvmf_create_transport", 00:04:15.516 "nvmf_get_targets", 00:04:15.516 "nvmf_delete_target", 00:04:15.516 "nvmf_create_target", 00:04:15.516 "nvmf_subsystem_allow_any_host", 00:04:15.516 "nvmf_subsystem_set_keys", 00:04:15.516 "nvmf_subsystem_remove_host", 00:04:15.516 "nvmf_subsystem_add_host", 00:04:15.516 "nvmf_ns_remove_host", 00:04:15.516 "nvmf_ns_add_host", 00:04:15.516 "nvmf_subsystem_remove_ns", 00:04:15.516 "nvmf_subsystem_set_ns_ana_group", 00:04:15.516 "nvmf_subsystem_add_ns", 00:04:15.516 "nvmf_subsystem_listener_set_ana_state", 00:04:15.516 "nvmf_discovery_get_referrals", 00:04:15.516 "nvmf_discovery_remove_referral", 00:04:15.516 "nvmf_discovery_add_referral", 00:04:15.516 "nvmf_subsystem_remove_listener", 00:04:15.516 "nvmf_subsystem_add_listener", 00:04:15.516 "nvmf_delete_subsystem", 00:04:15.516 "nvmf_create_subsystem", 00:04:15.516 "nvmf_get_subsystems", 00:04:15.516 "env_dpdk_get_mem_stats", 00:04:15.516 "nbd_get_disks", 00:04:15.516 "nbd_stop_disk", 00:04:15.516 "nbd_start_disk", 00:04:15.516 "ublk_recover_disk", 00:04:15.516 "ublk_get_disks", 00:04:15.516 "ublk_stop_disk", 00:04:15.516 "ublk_start_disk", 00:04:15.516 "ublk_destroy_target", 00:04:15.516 "ublk_create_target", 00:04:15.516 "virtio_blk_create_transport", 00:04:15.516 "virtio_blk_get_transports", 00:04:15.516 "vhost_controller_set_coalescing", 00:04:15.516 "vhost_get_controllers", 00:04:15.516 "vhost_delete_controller", 00:04:15.516 "vhost_create_blk_controller", 00:04:15.516 "vhost_scsi_controller_remove_target", 00:04:15.516 "vhost_scsi_controller_add_target", 00:04:15.516 "vhost_start_scsi_controller", 00:04:15.516 "vhost_create_scsi_controller", 00:04:15.516 "thread_set_cpumask", 00:04:15.516 "scheduler_set_options", 00:04:15.516 "framework_get_governor", 00:04:15.516 "framework_get_scheduler", 00:04:15.516 "framework_set_scheduler", 00:04:15.516 "framework_get_reactors", 00:04:15.516 "thread_get_io_channels", 00:04:15.516 "thread_get_pollers", 00:04:15.516 "thread_get_stats", 00:04:15.516 "framework_monitor_context_switch", 00:04:15.516 "spdk_kill_instance", 00:04:15.516 "log_enable_timestamps", 00:04:15.516 "log_get_flags", 00:04:15.516 "log_clear_flag", 00:04:15.516 "log_set_flag", 00:04:15.516 "log_get_level", 00:04:15.516 "log_set_level", 00:04:15.516 "log_get_print_level", 00:04:15.516 "log_set_print_level", 00:04:15.516 "framework_enable_cpumask_locks", 00:04:15.516 "framework_disable_cpumask_locks", 00:04:15.516 "framework_wait_init", 00:04:15.516 "framework_start_init", 00:04:15.516 "scsi_get_devices", 00:04:15.516 "bdev_get_histogram", 00:04:15.516 "bdev_enable_histogram", 00:04:15.516 "bdev_set_qos_limit", 00:04:15.516 "bdev_set_qd_sampling_period", 00:04:15.516 "bdev_get_bdevs", 00:04:15.516 "bdev_reset_iostat", 00:04:15.516 "bdev_get_iostat", 00:04:15.516 "bdev_examine", 00:04:15.516 "bdev_wait_for_examine", 00:04:15.516 "bdev_set_options", 00:04:15.516 "accel_get_stats", 00:04:15.516 "accel_set_options", 00:04:15.516 "accel_set_driver", 00:04:15.516 "accel_crypto_key_destroy", 00:04:15.516 "accel_crypto_keys_get", 00:04:15.516 "accel_crypto_key_create", 00:04:15.516 "accel_assign_opc", 00:04:15.516 "accel_get_module_info", 00:04:15.516 "accel_get_opc_assignments", 00:04:15.516 "vmd_rescan", 00:04:15.516 "vmd_remove_device", 00:04:15.516 "vmd_enable", 00:04:15.516 "sock_get_default_impl", 00:04:15.516 "sock_set_default_impl", 
00:04:15.516 "sock_impl_set_options", 00:04:15.516 "sock_impl_get_options", 00:04:15.516 "iobuf_get_stats", 00:04:15.516 "iobuf_set_options", 00:04:15.516 "keyring_get_keys", 00:04:15.516 "vfu_tgt_set_base_path", 00:04:15.516 "framework_get_pci_devices", 00:04:15.516 "framework_get_config", 00:04:15.516 "framework_get_subsystems", 00:04:15.516 "fsdev_set_opts", 00:04:15.516 "fsdev_get_opts", 00:04:15.516 "trace_get_info", 00:04:15.516 "trace_get_tpoint_group_mask", 00:04:15.516 "trace_disable_tpoint_group", 00:04:15.516 "trace_enable_tpoint_group", 00:04:15.516 "trace_clear_tpoint_mask", 00:04:15.516 "trace_set_tpoint_mask", 00:04:15.516 "notify_get_notifications", 00:04:15.516 "notify_get_types", 00:04:15.516 "spdk_get_version", 00:04:15.516 "rpc_get_methods" 00:04:15.516 ] 00:04:15.516 07:04:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.516 07:04:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:15.516 07:04:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1032829 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1032829 ']' 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1032829 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1032829 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1032829' 00:04:15.516 killing process with pid 1032829 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1032829 00:04:15.516 07:04:50 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1032829 00:04:15.778 00:04:15.778 real 0m1.539s 00:04:15.778 user 0m2.810s 00:04:15.778 sys 0m0.454s 00:04:15.778 07:04:50 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.778 07:04:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.778 ************************************ 00:04:15.778 END TEST spdkcli_tcp 00:04:15.778 ************************************ 00:04:15.778 07:04:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.778 07:04:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.778 07:04:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.778 07:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:15.778 ************************************ 00:04:15.778 START TEST dpdk_mem_utility 00:04:15.778 ************************************ 00:04:15.778 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.778 * Looking for test storage... 
00:04:15.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:15.778 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.778 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.778 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.041 07:04:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.041 --rc genhtml_branch_coverage=1 00:04:16.041 --rc genhtml_function_coverage=1 00:04:16.041 --rc genhtml_legend=1 00:04:16.041 --rc geninfo_all_blocks=1 00:04:16.041 --rc geninfo_unexecuted_blocks=1 00:04:16.041 00:04:16.041 ' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.041 --rc 
genhtml_branch_coverage=1 00:04:16.041 --rc genhtml_function_coverage=1 00:04:16.041 --rc genhtml_legend=1 00:04:16.041 --rc geninfo_all_blocks=1 00:04:16.041 --rc geninfo_unexecuted_blocks=1 00:04:16.041 00:04:16.041 ' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.041 --rc genhtml_branch_coverage=1 00:04:16.041 --rc genhtml_function_coverage=1 00:04:16.041 --rc genhtml_legend=1 00:04:16.041 --rc geninfo_all_blocks=1 00:04:16.041 --rc geninfo_unexecuted_blocks=1 00:04:16.041 00:04:16.041 ' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.041 --rc genhtml_branch_coverage=1 00:04:16.041 --rc genhtml_function_coverage=1 00:04:16.041 --rc genhtml_legend=1 00:04:16.041 --rc geninfo_all_blocks=1 00:04:16.041 --rc geninfo_unexecuted_blocks=1 00:04:16.041 00:04:16.041 ' 00:04:16.041 07:04:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.041 07:04:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1033251 00:04:16.041 07:04:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1033251 00:04:16.041 07:04:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1033251 ']' 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.041 07:04:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.041 [2024-11-20 07:04:50.667302] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:16.041 [2024-11-20 07:04:50.667375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033251 ] 00:04:16.041 [2024-11-20 07:04:50.750001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.041 [2024-11-20 07:04:50.792037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.985 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:16.985 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:16.985 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:16.985 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:16.985 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.985 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.985 { 00:04:16.985 "filename": "/tmp/spdk_mem_dump.txt" 00:04:16.985 } 00:04:16.985 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.985 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:16.985 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:16.985 1 heaps totaling size 818.000000 MiB 00:04:16.985 size: 818.000000 MiB heap id: 0 00:04:16.985 end heaps---------- 00:04:16.985 9 mempools totaling size 603.782043 MiB 00:04:16.985 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:16.985 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:16.985 size: 100.555481 MiB name: bdev_io_1033251 00:04:16.985 size: 50.003479 MiB name: msgpool_1033251 00:04:16.985 size: 36.509338 MiB name: fsdev_io_1033251 00:04:16.986 size: 21.763794 MiB name: PDU_Pool 00:04:16.986 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:16.986 size: 4.133484 MiB name: evtpool_1033251 00:04:16.986 size: 0.026123 MiB name: Session_Pool 00:04:16.986 end mempools------- 00:04:16.986 6 memzones totaling size 4.142822 MiB 00:04:16.986 size: 1.000366 MiB name: RG_ring_0_1033251 00:04:16.986 size: 1.000366 MiB name: RG_ring_1_1033251 00:04:16.986 size: 1.000366 MiB name: RG_ring_4_1033251 00:04:16.986 size: 1.000366 MiB name: RG_ring_5_1033251 00:04:16.986 size: 0.125366 MiB name: RG_ring_2_1033251 00:04:16.986 size: 0.015991 MiB name: RG_ring_3_1033251 00:04:16.986 end memzones------- 00:04:16.986 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.986 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:16.986 list of free elements. 
size: 10.852478 MiB 00:04:16.986 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:16.986 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:16.986 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:16.986 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:16.986 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:16.986 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:16.986 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:16.986 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:16.986 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:16.986 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:16.986 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:16.986 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:16.986 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:16.986 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:16.986 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:16.986 list of standard malloc elements. size: 199.218628 MiB 00:04:16.986 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:16.986 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:16.986 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:16.986 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:16.986 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:16.986 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:16.986 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:16.986 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:16.986 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:16.986 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:16.986 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:16.986 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:16.986 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:16.986 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:16.986 list of memzone associated elements. size: 607.928894 MiB 00:04:16.986 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:16.986 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.986 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:16.986 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:16.986 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:16.986 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1033251_0 00:04:16.986 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:16.986 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1033251_0 00:04:16.986 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:16.986 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1033251_0 00:04:16.986 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:16.986 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:16.986 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:16.986 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.986 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:16.986 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1033251_0 00:04:16.986 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:16.986 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1033251 00:04:16.986 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:16.986 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1033251 00:04:16.986 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:16.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.986 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:16.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.986 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:16.986 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.986 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:16.986 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.986 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:16.986 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1033251 00:04:16.986 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:16.986 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1033251 00:04:16.986 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:16.986 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1033251 00:04:16.986 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:16.986 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1033251 00:04:16.986 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:16.986 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1033251 00:04:16.986 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:16.986 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1033251 00:04:16.986 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:16.986 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.986 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:16.986 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.986 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:16.986 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.986 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:16.986 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1033251 00:04:16.986 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:16.986 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1033251 00:04:16.986 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:16.986 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.986 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:16.986 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.986 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:16.986 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1033251 00:04:16.986 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:16.986 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.986 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:16.986 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1033251 00:04:16.986 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:16.986 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1033251 00:04:16.986 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:16.986 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1033251 00:04:16.986 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:16.986 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.986 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.986 07:04:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1033251 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1033251 ']' 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1033251 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1033251 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1033251' 00:04:16.987 killing process with pid 1033251 00:04:16.987 07:04:51 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1033251 00:04:16.987 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1033251 00:04:17.248 00:04:17.248 real 0m1.380s 00:04:17.248 user 0m1.445s 00:04:17.248 sys 0m0.418s 00:04:17.248 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.248 07:04:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.248 ************************************ 00:04:17.248 END TEST dpdk_mem_utility 00:04:17.248 ************************************ 00:04:17.248 07:04:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:17.248 07:04:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.248 07:04:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.249 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:04:17.249 ************************************ 00:04:17.249 START TEST event 00:04:17.249 ************************************ 00:04:17.249 07:04:51 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:17.249 * Looking for test storage... 00:04:17.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:17.249 07:04:51 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:17.249 07:04:51 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:17.249 07:04:51 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:17.509 07:04:52 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:17.510 07:04:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.510 07:04:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.510 07:04:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.510 07:04:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.510 07:04:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.510 07:04:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.510 07:04:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.510 07:04:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.510 07:04:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.510 07:04:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.510 07:04:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.510 07:04:52 event -- scripts/common.sh@344 -- # case "$op" in 00:04:17.510 07:04:52 event -- scripts/common.sh@345 -- # : 1 00:04:17.510 07:04:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.510 07:04:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.510 07:04:52 event -- scripts/common.sh@365 -- # decimal 1 00:04:17.510 07:04:52 event -- scripts/common.sh@353 -- # local d=1 00:04:17.510 07:04:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.510 07:04:52 event -- scripts/common.sh@355 -- # echo 1 00:04:17.510 07:04:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.510 07:04:52 event -- scripts/common.sh@366 -- # decimal 2 00:04:17.510 07:04:52 event -- scripts/common.sh@353 -- # local d=2 00:04:17.510 07:04:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.510 07:04:52 event -- scripts/common.sh@355 -- # echo 2 00:04:17.510 07:04:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.510 07:04:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.510 07:04:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.510 07:04:52 event -- scripts/common.sh@368 -- # return 0 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.510 --rc genhtml_branch_coverage=1 00:04:17.510 --rc genhtml_function_coverage=1 00:04:17.510 --rc genhtml_legend=1 00:04:17.510 --rc geninfo_all_blocks=1 00:04:17.510 --rc geninfo_unexecuted_blocks=1 00:04:17.510 00:04:17.510 ' 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.510 --rc genhtml_branch_coverage=1 00:04:17.510 --rc genhtml_function_coverage=1 00:04:17.510 --rc genhtml_legend=1 00:04:17.510 --rc geninfo_all_blocks=1 00:04:17.510 --rc geninfo_unexecuted_blocks=1 00:04:17.510 00:04:17.510 ' 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.510 --rc genhtml_branch_coverage=1 00:04:17.510 --rc genhtml_function_coverage=1 00:04:17.510 --rc genhtml_legend=1 00:04:17.510 --rc geninfo_all_blocks=1 00:04:17.510 --rc geninfo_unexecuted_blocks=1 00:04:17.510 00:04:17.510 ' 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.510 --rc genhtml_branch_coverage=1 00:04:17.510 --rc genhtml_function_coverage=1 00:04:17.510 --rc genhtml_legend=1 00:04:17.510 --rc geninfo_all_blocks=1 00:04:17.510 --rc geninfo_unexecuted_blocks=1 00:04:17.510 00:04:17.510 ' 00:04:17.510 07:04:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:17.510 07:04:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:17.510 07:04:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:17.510 07:04:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.510 07:04:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.510 ************************************ 00:04:17.510 START TEST event_perf 00:04:17.510 ************************************ 00:04:17.510 07:04:52 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:17.510 Running I/O for 1 seconds...[2024-11-20 07:04:52.140937] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:17.510 [2024-11-20 07:04:52.141033] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033645 ] 00:04:17.510 [2024-11-20 07:04:52.224663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.510 [2024-11-20 07:04:52.263429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.510 [2024-11-20 07:04:52.263542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.510 [2024-11-20 07:04:52.263697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.510 Running I/O for 1 seconds...[2024-11-20 07:04:52.263697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.897 00:04:18.897 lcore 0: 180670 00:04:18.897 lcore 1: 180671 00:04:18.897 lcore 2: 180669 00:04:18.897 lcore 3: 180672 00:04:18.897 done. 00:04:18.897 00:04:18.897 real 0m1.179s 00:04:18.897 user 0m4.097s 00:04:18.897 sys 0m0.079s 00:04:18.897 07:04:53 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.897 07:04:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.897 ************************************ 00:04:18.897 END TEST event_perf 00:04:18.897 ************************************ 00:04:18.897 07:04:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:18.897 07:04:53 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:18.897 07:04:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.897 07:04:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.897 ************************************ 00:04:18.897 START TEST event_reactor 00:04:18.897 ************************************ 00:04:18.897 07:04:53 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:18.897 [2024-11-20 07:04:53.400372] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:18.897 [2024-11-20 07:04:53.400468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034005 ] 00:04:18.897 [2024-11-20 07:04:53.484100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.897 [2024-11-20 07:04:53.521348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.840 test_start 00:04:19.840 oneshot 00:04:19.840 tick 100 00:04:19.840 tick 100 00:04:19.840 tick 250 00:04:19.840 tick 100 00:04:19.840 tick 100 00:04:19.840 tick 250 00:04:19.840 tick 500 00:04:19.840 tick 100 00:04:19.840 tick 100 00:04:19.840 tick 100 00:04:19.840 tick 250 00:04:19.840 tick 100 00:04:19.840 tick 100 00:04:19.840 test_end 00:04:19.840 00:04:19.840 real 0m1.174s 00:04:19.840 user 0m1.094s 00:04:19.840 sys 0m0.077s 00:04:19.840 07:04:54 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.840 07:04:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:19.840 ************************************ 00:04:19.840 END TEST event_reactor 00:04:19.840 ************************************ 00:04:19.840 07:04:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.840 07:04:54 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:19.840 07:04:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.840 07:04:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.101 ************************************ 00:04:20.101 START TEST event_reactor_perf 00:04:20.101 ************************************ 00:04:20.101 07:04:54 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:20.101 [2024-11-20 07:04:54.655362] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:20.101 [2024-11-20 07:04:54.655469] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034142 ] 00:04:20.101 [2024-11-20 07:04:54.738259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.101 [2024-11-20 07:04:54.776297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.044 test_start 00:04:21.044 test_end 00:04:21.044 Performance: 367286 events per second 00:04:21.044 00:04:21.044 real 0m1.174s 00:04:21.044 user 0m1.096s 00:04:21.044 sys 0m0.074s 00:04:21.044 07:04:55 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.044 07:04:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.044 ************************************ 00:04:21.044 END TEST event_reactor_perf 00:04:21.044 ************************************ 00:04:21.305 07:04:55 event -- event/event.sh@49 -- # uname -s 00:04:21.305 07:04:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:21.305 07:04:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:21.305 07:04:55 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.305 07:04:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.305 07:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.305 ************************************ 00:04:21.305 START TEST event_scheduler 00:04:21.305 ************************************ 00:04:21.305 07:04:55 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:21.305 * Looking for test storage... 
00:04:21.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:21.305 07:04:55 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.305 07:04:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.305 07:04:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.568 07:04:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.568 --rc genhtml_branch_coverage=1 00:04:21.568 --rc genhtml_function_coverage=1 00:04:21.568 --rc genhtml_legend=1 00:04:21.568 --rc geninfo_all_blocks=1 00:04:21.568 --rc geninfo_unexecuted_blocks=1 00:04:21.568 00:04:21.568 ' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.568 --rc genhtml_branch_coverage=1 00:04:21.568 --rc genhtml_function_coverage=1 00:04:21.568 --rc genhtml_legend=1 00:04:21.568 --rc geninfo_all_blocks=1 00:04:21.568 --rc geninfo_unexecuted_blocks=1 00:04:21.568 00:04:21.568 ' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:21.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.568 --rc genhtml_branch_coverage=1 00:04:21.568 --rc genhtml_function_coverage=1 00:04:21.568 --rc genhtml_legend=1 00:04:21.568 --rc geninfo_all_blocks=1 00:04:21.568 --rc geninfo_unexecuted_blocks=1 00:04:21.568 00:04:21.568 ' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.568 --rc genhtml_branch_coverage=1 00:04:21.568 --rc genhtml_function_coverage=1 00:04:21.568 --rc genhtml_legend=1 00:04:21.568 --rc geninfo_all_blocks=1 00:04:21.568 --rc geninfo_unexecuted_blocks=1 00:04:21.568 00:04:21.568 ' 00:04:21.568 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:21.568 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1034431 00:04:21.568 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.568 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1034431 00:04:21.568 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1034431 ']' 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:21.568 07:04:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:21.568 [2024-11-20 07:04:56.154270] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:21.568 [2024-11-20 07:04:56.154342] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034431 ] 00:04:21.568 [2024-11-20 07:04:56.227759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.568 [2024-11-20 07:04:56.267917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.568 [2024-11-20 07:04:56.268130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.568 [2024-11-20 07:04:56.268255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.568 [2024-11-20 07:04:56.268256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:22.511 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 [2024-11-20 07:04:56.970439] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:22.511 [2024-11-20 07:04:56.970454] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:22.511 [2024-11-20 07:04:56.970462] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:22.511 [2024-11-20 07:04:56.970466] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:22.511 [2024-11-20 07:04:56.970470] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 [2024-11-20 07:04:57.031289] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
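For context, the scheduler test above starts the app with --wait-for-rpc, selects the "dynamic" scheduler over RPC, then completes framework initialization. A minimal sketch of that same RPC sequence, assuming a target listening on the default /var/tmp/spdk.sock (rpc_cmd in the trace is a wrapper around scripts/rpc.py):

  scripts/rpc.py framework_set_scheduler dynamic   # must be issued before framework_start_init
  scripts/rpc.py framework_start_init              # finish the init deferred by --wait-for-rpc
  scripts/rpc.py framework_get_scheduler           # confirm "dynamic" is now active

All three methods appear in the rpc_get_methods listing earlier in this log. The dpdk_governor *ERROR* above reflects what its message says, a core mask covering only part of an SMT sibling set; as the following NOTICE lines show, the dynamic scheduler simply continues without the DPDK governor.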
00:04:22.511 07:04:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:22.511 07:04:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.511 07:04:57 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 ************************************ 00:04:22.511 START TEST scheduler_create_thread 00:04:22.511 ************************************ 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 2 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 3 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 4 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 5 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 6 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.511 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.511 7 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.512 8 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.512 9 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.512 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.084 10 00:04:23.084 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.084 07:04:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:23.084 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.084 07:04:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.469 07:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.469 07:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:24.469 07:04:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:24.469 07:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.469 07:04:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.040 07:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.040 07:04:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:25.040 07:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.040 07:04:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:25.984 07:05:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:25.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.984 07:05:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.555 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.555 00:04:26.555 real 0m4.223s 00:04:26.555 user 0m0.027s 00:04:26.555 sys 0m0.004s 00:04:26.555 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.555 07:05:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.555 ************************************ 00:04:26.555 END TEST scheduler_create_thread 00:04:26.555 ************************************ 00:04:26.815 07:05:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:26.815 07:05:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1034431 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1034431 ']' 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 1034431 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1034431 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1034431' 00:04:26.815 killing process with pid 1034431 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1034431 00:04:26.815 07:05:01 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1034431 00:04:27.075 [2024-11-20 07:05:01.672750] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
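The scheduler_create_thread test above drives the test app's plugin RPCs: scheduler_thread_create spawns a thread with a name (-n), cpumask (-m) and active percentage (-a), scheduler_thread_set_active changes a running thread's busy percentage, and scheduler_thread_delete removes it. A minimal sketch of the same calls, assuming the scheduler_plugin module is importable by rpc.py as in this run:

  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # thread id 11 -> 50% active
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12           # remove thread id 12

The ids 11 and 12 mirror the thread_id values captured in the trace above; a real caller would use the id returned by its own scheduler_thread_create call.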
00:04:27.075 00:04:27.075 real 0m5.939s 00:04:27.075 user 0m13.947s 00:04:27.075 sys 0m0.416s 00:04:27.075 07:05:01 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.075 07:05:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.075 ************************************ 00:04:27.075 END TEST event_scheduler 00:04:27.075 ************************************ 00:04:27.336 07:05:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:27.336 07:05:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:27.336 07:05:01 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.336 07:05:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.336 07:05:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.336 ************************************ 00:04:27.336 START TEST app_repeat 00:04:27.336 ************************************ 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1035813 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1035813' 00:04:27.336 Process app_repeat pid: 1035813 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:27.336 spdk_app_start Round 0 00:04:27.336 07:05:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1035813 /var/tmp/spdk-nbd.sock 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1035813 ']' 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.336 07:05:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.336 [2024-11-20 07:05:01.952358] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:27.336 [2024-11-20 07:05:01.952423] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035813 ] 00:04:27.336 [2024-11-20 07:05:02.032339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.336 [2024-11-20 07:05:02.068660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.336 [2024-11-20 07:05:02.068662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.596 07:05:02 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.596 07:05:02 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:27.596 07:05:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.596 Malloc0 00:04:27.596 07:05:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:27.856 Malloc1 00:04:27.856 07:05:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.856 07:05:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.117 /dev/nbd0 00:04:28.117 07:05:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.117 07:05:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.117 1+0 records in 00:04:28.117 1+0 records out 00:04:28.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233424 s, 17.5 MB/s 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:28.117 07:05:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:28.117 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.117 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.117 07:05:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:28.377 /dev/nbd1 00:04:28.377 07:05:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:28.377 07:05:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.377 1+0 records in 00:04:28.377 1+0 records out 00:04:28.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297566 s, 13.8 MB/s 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:28.377 07:05:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:28.377 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.377 07:05:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.377 
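Annotation: before either nbd device is used, the harness runs waitfornbd (common/autotest_common.sh@870-891 in the trace). It polls /proc/partitions until the kernel has registered the device, then reads a single 4 KiB block with dd and checks the scratch file's size with stat to prove the device actually services I/O. A simplified reconstruction is below; the retry bound of 20 and the dd/stat commands come straight from the trace, while the scratch-file path and the sleep interval between retries are assumptions:

    # Sketch reconstructed from the trace; error handling trimmed.
    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest   # real helper uses a path under the SPDK test tree
        # Phase 1: wait for the kernel to list the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: pull one 4096-byte block to confirm reads work.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0   # trace: '[' 4096 '!=' 0 ']'
            fi
            sleep 0.1
        done
        return 1
    }

The dd throughput lines in the log (e.g. "4096 bytes (4.1 kB, 4.0 KiB) copied") are this readiness probe, not the data-integrity pass, which comes later.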
07:05:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.377 07:05:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.378 07:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.378 07:05:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:28.378 { 00:04:28.378 "nbd_device": "/dev/nbd0", 00:04:28.378 "bdev_name": "Malloc0" 00:04:28.378 }, 00:04:28.378 { 00:04:28.378 "nbd_device": "/dev/nbd1", 00:04:28.378 "bdev_name": "Malloc1" 00:04:28.378 } 00:04:28.378 ]' 00:04:28.378 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.378 { 00:04:28.378 "nbd_device": "/dev/nbd0", 00:04:28.378 "bdev_name": "Malloc0" 00:04:28.378 }, 00:04:28.378 { 00:04:28.378 "nbd_device": "/dev/nbd1", 00:04:28.378 "bdev_name": "Malloc1" 00:04:28.378 } 00:04:28.378 ]' 00:04:28.378 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.637 /dev/nbd1' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.637 /dev/nbd1' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:28.637 256+0 records in 00:04:28.637 256+0 records out 00:04:28.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125502 s, 83.6 MB/s 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:28.637 256+0 records in 00:04:28.637 256+0 records out 00:04:28.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379342 s, 27.6 MB/s 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:28.637 256+0 records in 00:04:28.637 256+0 records out 00:04:28.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171823 s, 61.0 MB/s 00:04:28.637 07:05:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.637 07:05:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.638 07:05:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:28.638 07:05:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:28.638 07:05:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.638 07:05:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.897 07:05:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:28.898 07:05:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.158 07:05:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.158 07:05:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:29.418 07:05:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:29.418 [2024-11-20 07:05:04.179977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.678 [2024-11-20 07:05:04.216267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.678 [2024-11-20 07:05:04.216269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.678 [2024-11-20 07:05:04.247987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:29.678 [2024-11-20 07:05:04.248023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:32.977 07:05:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.977 07:05:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:32.978 spdk_app_start Round 1 00:04:32.978 07:05:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1035813 /var/tmp/spdk-nbd.sock 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1035813 ']' 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
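Annotation: each app_repeat round ends with the data-integrity pass visible above. nbd_dd_data_verify in write mode fills a 1 MiB scratch file from /dev/urandom and dd's it onto both nbd devices; in verify mode it byte-compares each device against that file with cmp before the disks are stopped. In outline (paths abbreviated; this is a sketch of the flow as traced, not the exact helper):

    # Write phase: 256 x 4 KiB of random data onto each device.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Verify phase: each device must read back identical to the source file.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$nbd"   # non-zero exit on the first mismatch
    done
    rm nbdrandtest

Because the round only proceeds when both cmp calls succeed silently, the absence of any cmp output in the log is itself the pass signal; the subsequent nbd_stop_disk calls and the empty nbd_get_disks result ("[]", count=0) confirm clean teardown.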
00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.978 07:05:07 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:32.978 07:05:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.978 Malloc0 00:04:32.978 07:05:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.978 Malloc1 00:04:32.978 07:05:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.978 07:05:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.238 /dev/nbd0 00:04:33.238 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.238 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:33.238 1+0 records in 00:04:33.238 1+0 records out 00:04:33.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292192 s, 14.0 MB/s 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:33.238 07:05:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:33.238 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.238 07:05:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.238 07:05:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.239 /dev/nbd1 00:04:33.239 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.239 07:05:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:33.239 07:05:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.239 1+0 records in 00:04:33.239 1+0 records out 00:04:33.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294066 s, 13.9 MB/s 00:04:33.499 07:05:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.499 07:05:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:33.499 07:05:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.499 07:05:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:33.499 07:05:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.499 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:33.499 { 00:04:33.499 "nbd_device": "/dev/nbd0", 00:04:33.499 "bdev_name": "Malloc0" 00:04:33.499 }, 00:04:33.499 { 00:04:33.499 "nbd_device": "/dev/nbd1", 00:04:33.499 "bdev_name": "Malloc1" 00:04:33.499 } 00:04:33.499 ]' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.500 { 00:04:33.500 "nbd_device": "/dev/nbd0", 00:04:33.500 "bdev_name": "Malloc0" 00:04:33.500 }, 00:04:33.500 { 00:04:33.500 "nbd_device": "/dev/nbd1", 00:04:33.500 "bdev_name": "Malloc1" 00:04:33.500 } 00:04:33.500 ]' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.500 /dev/nbd1' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.500 /dev/nbd1' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.500 256+0 records in 00:04:33.500 256+0 records out 00:04:33.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125634 s, 83.5 MB/s 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.500 07:05:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.760 256+0 records in 00:04:33.760 256+0 records out 00:04:33.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161033 s, 65.1 MB/s 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.760 256+0 records in 00:04:33.760 256+0 records out 00:04:33.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175941 s, 59.6 MB/s 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.760 07:05:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.761 07:05:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.021 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.022 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.282 07:05:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.282 07:05:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.543 07:05:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:34.543 [2024-11-20 07:05:09.216308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.543 [2024-11-20 07:05:09.252260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.543 [2024-11-20 07:05:09.252262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.543 [2024-11-20 07:05:09.284724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:34.543 [2024-11-20 07:05:09.284759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:37.844 spdk_app_start Round 2 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1035813 /var/tmp/spdk-nbd.sock 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1035813 ']' 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
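Annotation: the "Round N" structure is driven from test/event/event.sh. The app_repeat binary is started once with -t 4 (four rounds); after each round's verify pass, the harness sends spdk_kill_instance SIGTERM over the nbd RPC socket, the app logs "Shutdown signal received, stop current app iteration" and re-enters spdk_app_start for the next round, recreating Malloc0/Malloc1 from scratch. A schematic of the loop, condensed from the event.sh lines quoted in the trace (rpc_py here stands for scripts/rpc.py -s /var/tmp/spdk-nbd.sock):

    # Condensed sketch of test/event/event.sh@23-39 as seen in the trace.
    for i in {0..2}; do                    # rounds 0-2; the app runs -t 4 rounds total
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # RPC socket is up again
        # ... create Malloc0/Malloc1, start nbd disks, write+verify, stop disks ...
        rpc_py spdk_kill_instance SIGTERM  # ends this round; the app restarts itself
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # round 3, the last of -t 4
    killprocess "$repeat_pid"                            # final teardown

The for loop only covers rounds 0-2; round 3 is the app's own final iteration, which the harness merely waits out before killing the process, matching the waitforlisten at event.sh@38 and killprocess at event.sh@39 later in the log.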
00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.844 07:05:12 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.844 Malloc0 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.844 Malloc1 00:04:37.844 07:05:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.844 07:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.105 /dev/nbd0 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:38.105 1+0 records in 00:04:38.105 1+0 records out 00:04:38.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184808 s, 22.2 MB/s 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:38.105 07:05:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.105 07:05:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.366 /dev/nbd1 00:04:38.366 07:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.366 07:05:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.366 1+0 records in 00:04:38.366 1+0 records out 00:04:38.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286153 s, 14.3 MB/s 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.366 07:05:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:38.367 07:05:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.367 07:05:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:38.367 07:05:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:38.367 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.367 07:05:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.367 07:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.367 07:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.367 07:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:38.628 { 00:04:38.628 "nbd_device": "/dev/nbd0", 00:04:38.628 "bdev_name": "Malloc0" 00:04:38.628 }, 00:04:38.628 { 00:04:38.628 "nbd_device": "/dev/nbd1", 00:04:38.628 "bdev_name": "Malloc1" 00:04:38.628 } 00:04:38.628 ]' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.628 { 00:04:38.628 "nbd_device": "/dev/nbd0", 00:04:38.628 "bdev_name": "Malloc0" 00:04:38.628 }, 00:04:38.628 { 00:04:38.628 "nbd_device": "/dev/nbd1", 00:04:38.628 "bdev_name": "Malloc1" 00:04:38.628 } 00:04:38.628 ]' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.628 /dev/nbd1' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.628 /dev/nbd1' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.628 256+0 records in 00:04:38.628 256+0 records out 00:04:38.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011783 s, 89.0 MB/s 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.628 256+0 records in 00:04:38.628 256+0 records out 00:04:38.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218544 s, 48.0 MB/s 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.628 256+0 records in 00:04:38.628 256+0 records out 00:04:38.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214742 s, 48.8 MB/s 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.628 07:05:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.629 07:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.629 07:05:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.629 07:05:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.629 07:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.629 07:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.890 07:05:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.151 07:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.411 07:05:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.411 07:05:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.411 07:05:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.672 [2024-11-20 07:05:14.284600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.672 [2024-11-20 07:05:14.320658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.672 [2024-11-20 07:05:14.320661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.672 [2024-11-20 07:05:14.352407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.672 [2024-11-20 07:05:14.352449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.971 07:05:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1035813 /var/tmp/spdk-nbd.sock 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1035813 ']' 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
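Annotation: after the final round the harness kills the app itself. killprocess (common/autotest_common.sh@952-976 in the trace) verifies the pid is still alive with kill -0, inspects the process name via ps (here "reactor_0", confirming it is an SPDK reactor and not a bare sudo wrapper), then signals it and waits for it to exit. A trimmed reconstruction, with the sudo special case elided since the trace only exercises the non-sudo path:

    # Sketch of the teardown helper; assumes $pid is a child of this shell
    # so that `wait` can reap it.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1        # must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")       # "reactor_0" in the trace
        # (real helper special-cases name == sudo; that branch is elided here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap; exit status propagates
    }

The "wait 1035813" in the log is this final reap; only after it returns does event.sh print the per-test timing summary and the END TEST banner.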
00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:42.971 07:05:17 event.app_repeat -- event/event.sh@39 -- # killprocess 1035813 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1035813 ']' 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1035813 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1035813 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1035813' 00:04:42.971 killing process with pid 1035813 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1035813 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1035813 00:04:42.971 spdk_app_start is called in Round 0. 00:04:42.971 Shutdown signal received, stop current app iteration 00:04:42.971 Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 reinitialization... 00:04:42.971 spdk_app_start is called in Round 1. 00:04:42.971 Shutdown signal received, stop current app iteration 00:04:42.971 Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 reinitialization... 00:04:42.971 spdk_app_start is called in Round 2. 00:04:42.971 Shutdown signal received, stop current app iteration 00:04:42.971 Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 reinitialization... 00:04:42.971 spdk_app_start is called in Round 3. 
00:04:42.971 Shutdown signal received, stop current app iteration 00:04:42.971 07:05:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:42.971 07:05:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:42.971 00:04:42.971 real 0m15.590s 00:04:42.971 user 0m33.940s 00:04:42.971 sys 0m2.270s 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.971 07:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.971 ************************************ 00:04:42.971 END TEST app_repeat 00:04:42.971 ************************************ 00:04:42.971 07:05:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.971 07:05:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.971 07:05:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.971 07:05:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.971 07:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.971 ************************************ 00:04:42.971 START TEST cpu_locks 00:04:42.971 ************************************ 00:04:42.971 07:05:17 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.971 * Looking for test storage... 00:04:42.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.971 07:05:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.971 07:05:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.971 07:05:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.232 07:05:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.232 --rc genhtml_branch_coverage=1 00:04:43.232 --rc genhtml_function_coverage=1 00:04:43.232 --rc genhtml_legend=1 00:04:43.232 --rc geninfo_all_blocks=1 00:04:43.232 --rc geninfo_unexecuted_blocks=1 00:04:43.232 00:04:43.232 ' 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.232 --rc genhtml_branch_coverage=1 00:04:43.232 --rc genhtml_function_coverage=1 00:04:43.232 --rc genhtml_legend=1 00:04:43.232 --rc geninfo_all_blocks=1 00:04:43.232 --rc geninfo_unexecuted_blocks=1 00:04:43.232 00:04:43.232 ' 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.232 --rc genhtml_branch_coverage=1 00:04:43.232 --rc genhtml_function_coverage=1 00:04:43.232 --rc genhtml_legend=1 00:04:43.232 --rc geninfo_all_blocks=1 00:04:43.232 --rc geninfo_unexecuted_blocks=1 00:04:43.232 00:04:43.232 ' 00:04:43.232 07:05:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.232 --rc genhtml_branch_coverage=1 00:04:43.232 --rc genhtml_function_coverage=1 00:04:43.232 --rc genhtml_legend=1 00:04:43.232 --rc geninfo_all_blocks=1 00:04:43.232 --rc geninfo_unexecuted_blocks=1 00:04:43.232 00:04:43.232 ' 00:04:43.232 07:05:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:43.233 07:05:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:43.233 07:05:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:43.233 07:05:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:43.233 07:05:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.233 07:05:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.233 07:05:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.233 ************************************ 
00:04:43.233 START TEST default_locks 00:04:43.233 ************************************ 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1039079 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1039079 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1039079 ']' 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.233 07:05:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.233 [2024-11-20 07:05:17.880394] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:43.233 [2024-11-20 07:05:17.880444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039079 ] 00:04:43.233 [2024-11-20 07:05:17.958285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.233 [2024-11-20 07:05:17.994946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.172 07:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.173 07:05:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:44.173 07:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1039079 00:04:44.173 07:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1039079 00:04:44.173 07:05:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.433 lslocks: write error 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1039079 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1039079 ']' 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1039079 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039079 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 1039079' 00:04:44.433 killing process with pid 1039079 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1039079 00:04:44.433 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1039079 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1039079 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1039079 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1039079 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1039079 ']' 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1039079) - No such process 00:04:44.694 ERROR: process (pid: 1039079) is no longer running 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.694 00:04:44.694 real 0m1.544s 00:04:44.694 user 0m1.653s 00:04:44.694 sys 0m0.529s 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.694 07:05:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.694 ************************************ 00:04:44.694 END TEST default_locks 00:04:44.694 ************************************ 00:04:44.694 07:05:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.694 07:05:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.694 07:05:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.694 07:05:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.694 ************************************ 00:04:44.694 START TEST default_locks_via_rpc 00:04:44.694 ************************************ 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1039446 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1039446 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1039446 ']' 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
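The NOT wrapper, exercised at the tail of default_locks above, inverts a command's exit status so the harness can assert an expected failure: waitforlisten on the already-killed pid 1039079 had to fail ("No such process"), and the test passes precisely because it did. A minimal sketch of that inversion; the real helper also validates its argument with type -t and treats statuses above 128 (death by signal) specially:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only if the wrapped command failed
    }

    NOT false && echo "false failed, as expected"
    NOT true  || echo "true succeeded, so NOT reports failure"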
00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.694 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.954 [2024-11-20 07:05:19.467074] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:44.954 [2024-11-20 07:05:19.467111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039446 ] 00:04:44.954 [2024-11-20 07:05:19.534739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.954 [2024-11-20 07:05:19.570381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1039446 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1039446 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1039446 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1039446 ']' 00:04:45.215 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1039446 00:04:45.476 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:45.476 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.476 07:05:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039446 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.476 
07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1039446' 00:04:45.476 killing process with pid 1039446 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1039446 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1039446 00:04:45.476 00:04:45.476 real 0m0.797s 00:04:45.476 user 0m0.805s 00:04:45.476 sys 0m0.365s 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.476 07:05:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.476 ************************************ 00:04:45.476 END TEST default_locks_via_rpc 00:04:45.476 ************************************ 00:04:45.736 07:05:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:45.736 07:05:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.736 07:05:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.736 07:05:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.736 ************************************ 00:04:45.736 START TEST non_locking_app_on_locked_coremask 00:04:45.736 ************************************ 00:04:45.736 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:45.736 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1039712 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1039712 /var/tmp/spdk.sock 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1039712 ']' 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.737 07:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.737 [2024-11-20 07:05:20.377232] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
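default_locks_via_rpc, which finished just above, toggles the core locks at runtime rather than at launch: after framework_disable_cpumask_locks the no_locks helper requires that the /var/tmp/spdk_cpu_lock_* glob match nothing, and after framework_enable_cpumask_locks the flock must be visible again. A sketch of driving that against the default /var/tmp/spdk.sock socket; $tgt_pid (the running spdk_tgt's pid) is an assumed variable:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc_py" framework_disable_cpumask_locks            # releases the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && exit 1    # no_locks: the glob must be empty
    "$rpc_py" framework_enable_cpumask_locks             # re-claims the locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock        # locks_exist: flock visible again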
00:04:45.737 [2024-11-20 07:05:20.377295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039712 ] 00:04:45.737 [2024-11-20 07:05:20.458126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.737 [2024-11-20 07:05:20.500557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1039821 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1039821 /var/tmp/spdk2.sock 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1039821 ']' 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:46.678 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.678 [2024-11-20 07:05:21.208885] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:46.678 [2024-11-20 07:05:21.208939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039821 ] 00:04:46.678 [2024-11-20 07:05:21.328529] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
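non_locking_app_on_locked_coremask, starting above, shows why the second launch is allowed to proceed: the first spdk_tgt holds the flock on core 0's lock file, and the second opts out of taking one, which is what the "CPU core locks deactivated." notice records. A sketch of that pairing, with the binary path as elsewhere in this log and a sleep standing in for the harness's waitforlisten:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                  # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    sleep 1
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
    pid2=$!
    # Without --disable-cpumask-locks the second instance would abort with
    # "Cannot create lock on core 0, probably process $pid1 has claimed it."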
00:04:46.678 [2024-11-20 07:05:21.328557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.678 [2024-11-20 07:05:21.400688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.249 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.249 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:47.249 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1039712 00:04:47.249 07:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1039712 00:04:47.249 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.820 lslocks: write error 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1039712 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1039712 ']' 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1039712 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.820 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039712 00:04:48.081 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.081 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.081 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1039712' 00:04:48.081 killing process with pid 1039712 00:04:48.081 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1039712 00:04:48.081 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1039712 00:04:48.341 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1039821 00:04:48.341 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1039821 ']' 00:04:48.341 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1039821 00:04:48.341 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:48.341 07:05:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1039821 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1039821' 00:04:48.341 
killing process with pid 1039821 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1039821 00:04:48.341 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1039821 00:04:48.602 00:04:48.602 real 0m2.958s 00:04:48.602 user 0m3.271s 00:04:48.602 sys 0m0.913s 00:04:48.602 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.602 07:05:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.602 ************************************ 00:04:48.602 END TEST non_locking_app_on_locked_coremask 00:04:48.602 ************************************ 00:04:48.602 07:05:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:48.602 07:05:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.602 07:05:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.602 07:05:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.602 ************************************ 00:04:48.602 START TEST locking_app_on_unlocked_coremask 00:04:48.602 ************************************ 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1040247 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1040247 /var/tmp/spdk.sock 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1040247 ']' 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.602 07:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.863 [2024-11-20 07:05:23.400872] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:48.863 [2024-11-20 07:05:23.400926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040247 ] 00:04:48.863 [2024-11-20 07:05:23.479191] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.863 [2024-11-20 07:05:23.479220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.863 [2024-11-20 07:05:23.516803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.434 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1040526 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1040526 /var/tmp/spdk2.sock 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1040526 ']' 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.435 07:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.695 [2024-11-20 07:05:24.241295] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
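locking_app_on_unlocked_coremask is the mirror image of the previous test: here the first instance starts with --disable-cpumask-locks, leaving core 0's lock file unclaimed, so a second, normally locking instance on the same mask comes up and takes it, as the lslocks check that follows confirms. Sketched under the same assumptions as above:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # primary leaves the core lock free
    sleep 1
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # secondary takes /var/tmp/spdk_cpu_lock_000
    pid2=$!
    sleep 1
    lslocks -p "$pid2" | grep -q spdk_cpu_lock      # the lock belongs to the secondary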
00:04:49.695 [2024-11-20 07:05:24.241353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040526 ] 00:04:49.695 [2024-11-20 07:05:24.364222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.695 [2024-11-20 07:05:24.436714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.266 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.266 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.266 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1040526 00:04:50.266 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1040526 00:04:50.266 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.839 lslocks: write error 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1040247 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1040247 ']' 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1040247 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1040247 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1040247' 00:04:50.839 killing process with pid 1040247 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1040247 00:04:50.839 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1040247 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1040526 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1040526 ']' 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1040526 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.421 07:05:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1040526 00:04:51.421 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.421 07:05:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.421 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1040526' 00:04:51.421 killing process with pid 1040526 00:04:51.421 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1040526 00:04:51.421 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1040526 00:04:51.682 00:04:51.682 real 0m2.919s 00:04:51.682 user 0m3.225s 00:04:51.682 sys 0m0.882s 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.682 ************************************ 00:04:51.682 END TEST locking_app_on_unlocked_coremask 00:04:51.682 ************************************ 00:04:51.682 07:05:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:51.682 07:05:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.682 07:05:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.682 07:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.682 ************************************ 00:04:51.682 START TEST locking_app_on_locked_coremask 00:04:51.682 ************************************ 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1040906 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1040906 /var/tmp/spdk.sock 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1040906 ']' 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.682 07:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.682 [2024-11-20 07:05:26.404122] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:51.682 [2024-11-20 07:05:26.404170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040906 ] 00:04:51.943 [2024-11-20 07:05:26.481686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.943 [2024-11-20 07:05:26.516268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1041208 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1041208 /var/tmp/spdk2.sock 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1041208 /var/tmp/spdk2.sock 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1041208 /var/tmp/spdk2.sock 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1041208 ']' 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.516 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.516 [2024-11-20 07:05:27.246841] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:52.516 [2024-11-20 07:05:27.246902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041208 ] 00:04:52.778 [2024-11-20 07:05:27.371984] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1040906 has claimed it. 00:04:52.778 [2024-11-20 07:05:27.372026] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:53.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1041208) - No such process 00:04:53.350 ERROR: process (pid: 1041208) is no longer running 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1040906 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1040906 00:04:53.350 07:05:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.610 lslocks: write error 00:04:53.610 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1040906 00:04:53.610 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1040906 ']' 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1040906 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1040906 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1040906' 00:04:53.611 killing process with pid 1040906 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1040906 00:04:53.611 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1040906 00:04:53.872 00:04:53.872 real 0m2.200s 00:04:53.872 user 0m2.493s 00:04:53.872 sys 0m0.615s 00:04:53.872 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
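locking_app_on_locked_coremask above asserts the failure path end to end: the second instance dies before it ever listens ("Cannot create lock on core 0, probably process 1040906 has claimed it."), waitforlisten gives up with "No such process", NOT converts that into a pass, and lslocks confirms the original owner still holds the flock. A foreground sketch of the same assertion; the harness instead backgrounds the doomed instance and polls it with NOT waitforlisten:

    "$spdk_tgt" -m 0x1 &                                   # owner claims core 0's lock
    owner=$!
    sleep 1
    if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then     # locks enabled: must abort
        echo "unexpected: second instance started" >&2
        exit 1
    fi
    lslocks -p "$owner" | grep -q spdk_cpu_lock            # owner keeps the lock
    kill "$owner"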
00:04:53.872 07:05:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.872 ************************************ 00:04:53.872 END TEST locking_app_on_locked_coremask 00:04:53.872 ************************************ 00:04:53.872 07:05:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:53.872 07:05:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.872 07:05:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.872 07:05:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.872 ************************************ 00:04:53.872 START TEST locking_overlapped_coremask 00:04:53.872 ************************************ 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1041440 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1041440 /var/tmp/spdk.sock 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1041440 ']' 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.872 07:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.133 [2024-11-20 07:05:28.682304] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:04:54.133 [2024-11-20 07:05:28.682370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041440 ] 00:04:54.133 [2024-11-20 07:05:28.768622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.133 [2024-11-20 07:05:28.812762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.134 [2024-11-20 07:05:28.812884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.134 [2024-11-20 07:05:28.812888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1041614 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1041614 /var/tmp/spdk2.sock 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1041614 /var/tmp/spdk2.sock 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1041614 /var/tmp/spdk2.sock 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1041614 ']' 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.074 07:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.074 [2024-11-20 07:05:29.530320] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
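The launch above is meant to fail, and the reason is pure mask arithmetic: -m 0x7 covers cores 0-2 while -m 0x1c covers cores 2-4, so the two instances collide on core 2, which is exactly the claim error that follows. The overlap can be checked without SPDK at all:

    first=0x7     # binary 00111 -> cores 0,1,2
    second=0x1c   # binary 11100 -> cores 2,3,4
    printf 'overlap mask: 0x%x\n' $(( first & second ))   # prints 0x4 -> core 2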
00:04:55.074 [2024-11-20 07:05:29.530374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041614 ] 00:04:55.074 [2024-11-20 07:05:29.627608] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1041440 has claimed it. 00:04:55.074 [2024-11-20 07:05:29.627643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1041614) - No such process 00:04:55.684 ERROR: process (pid: 1041614) is no longer running 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1041440 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1041440 ']' 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1041440 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1041440 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1041440' 00:04:55.684 killing process with pid 1041440 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1041440 00:04:55.684 07:05:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1041440 00:04:55.684 00:04:55.684 real 0m1.821s 00:04:55.684 user 0m5.220s 00:04:55.684 sys 0m0.392s 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.684 07:05:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.684 ************************************ 00:04:55.684 END TEST locking_overlapped_coremask 00:04:55.684 ************************************ 00:04:56.000 07:05:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:56.000 07:05:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.000 07:05:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.000 07:05:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.000 ************************************ 00:04:56.000 START TEST locking_overlapped_coremask_via_rpc 00:04:56.000 ************************************ 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1041903 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1041903 /var/tmp/spdk.sock 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1041903 ']' 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.000 07:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.000 [2024-11-20 07:05:30.571739] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:56.000 [2024-11-20 07:05:30.571795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041903 ] 00:04:56.000 [2024-11-20 07:05:30.653705] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.000 [2024-11-20 07:05:30.653739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.000 [2024-11-20 07:05:30.696664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.000 [2024-11-20 07:05:30.696779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.000 [2024-11-20 07:05:30.696782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1041992 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1041992 /var/tmp/spdk2.sock 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1041992 ']' 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.956 07:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.956 [2024-11-20 07:05:31.436555] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:56.956 [2024-11-20 07:05:31.436608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041992 ] 00:04:56.956 [2024-11-20 07:05:31.529245] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.956 [2024-11-20 07:05:31.529265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.956 [2024-11-20 07:05:31.592205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.956 [2024-11-20 07:05:31.595985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.956 [2024-11-20 07:05:31.595987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.529 [2024-11-20 07:05:32.236924] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1041903 has claimed it. 
00:04:57.529 request: 00:04:57.529 { 00:04:57.529 "method": "framework_enable_cpumask_locks", 00:04:57.529 "req_id": 1 00:04:57.529 } 00:04:57.529 Got JSON-RPC error response 00:04:57.529 response: 00:04:57.529 { 00:04:57.529 "code": -32603, 00:04:57.529 "message": "Failed to claim CPU core: 2" 00:04:57.529 } 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1041903 /var/tmp/spdk.sock 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1041903 ']' 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.529 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1041992 /var/tmp/spdk2.sock 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1041992 ']' 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
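A minimal sketch of the core-lock conflict exercised above, assuming the same checkout path, sockets, and core masks as the trace (process handling simplified): both targets start with locks disabled, then each tries to claim its cores over JSON-RPC; the first claim wins and the second fails with the -32603 response shown above.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path taken from the trace
$SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2
$SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, overlap on core 2
sleep 1                                                  # crude wait; the test uses waitforlisten instead
$SPDK/scripts/rpc.py framework_enable_cpumask_locks      # first claimer creates /var/tmp/spdk_cpu_lock_000..002
$SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected failure, matching the response above:
# {"code": -32603, "message": "Failed to claim CPU core: 2"}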
00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.790 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.051 00:04:58.051 real 0m2.097s 00:04:58.051 user 0m0.874s 00:04:58.051 sys 0m0.151s 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.051 07:05:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.051 ************************************ 00:04:58.051 END TEST locking_overlapped_coremask_via_rpc 00:04:58.051 ************************************ 00:04:58.051 07:05:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:58.051 07:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1041903 ]] 00:04:58.051 07:05:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1041903 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1041903 ']' 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1041903 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1041903 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1041903' 00:04:58.051 killing process with pid 1041903 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1041903 00:04:58.051 07:05:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1041903 00:04:58.312 07:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1041992 ]] 00:04:58.312 07:05:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1041992 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1041992 ']' 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1041992 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1041992 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1041992' 00:04:58.312 killing process with pid 1041992 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1041992 00:04:58.312 07:05:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1041992 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1041903 ]] 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1041903 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1041903 ']' 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1041903 00:04:58.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1041903) - No such process 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1041903 is not found' 00:04:58.573 Process with pid 1041903 is not found 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1041992 ]] 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1041992 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1041992 ']' 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1041992 00:04:58.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1041992) - No such process 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1041992 is not found' 00:04:58.573 Process with pid 1041992 is not found 00:04:58.573 07:05:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:58.573 00:04:58.573 real 0m15.607s 00:04:58.573 user 0m27.746s 00:04:58.573 sys 0m4.772s 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.573 07:05:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.573 ************************************ 00:04:58.573 END TEST cpu_locks 00:04:58.573 ************************************ 00:04:58.573 00:04:58.573 real 0m41.339s 00:04:58.573 user 1m22.232s 00:04:58.573 sys 0m8.083s 00:04:58.573 07:05:33 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.573 07:05:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.573 ************************************ 00:04:58.573 END TEST event 00:04:58.573 ************************************ 00:04:58.573 07:05:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.573 07:05:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.573 07:05:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.573 07:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:58.573 ************************************ 00:04:58.573 START TEST thread 00:04:58.573 ************************************ 00:04:58.573 07:05:33 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:58.835 * Looking for test storage... 00:04:58.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.835 07:05:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.835 07:05:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.835 07:05:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.835 07:05:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.835 07:05:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.835 07:05:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.835 07:05:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.835 07:05:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.835 07:05:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.835 07:05:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.835 07:05:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.835 07:05:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:58.835 07:05:33 thread -- scripts/common.sh@345 -- # : 1 00:04:58.835 07:05:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.835 07:05:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.835 07:05:33 thread -- scripts/common.sh@365 -- # decimal 1 00:04:58.835 07:05:33 thread -- scripts/common.sh@353 -- # local d=1 00:04:58.835 07:05:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.835 07:05:33 thread -- scripts/common.sh@355 -- # echo 1 00:04:58.835 07:05:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.835 07:05:33 thread -- scripts/common.sh@366 -- # decimal 2 00:04:58.835 07:05:33 thread -- scripts/common.sh@353 -- # local d=2 00:04:58.835 07:05:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.835 07:05:33 thread -- scripts/common.sh@355 -- # echo 2 00:04:58.835 07:05:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.835 07:05:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.835 07:05:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.835 07:05:33 thread -- scripts/common.sh@368 -- # return 0 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.835 --rc genhtml_branch_coverage=1 00:04:58.835 --rc genhtml_function_coverage=1 00:04:58.835 --rc genhtml_legend=1 00:04:58.835 --rc geninfo_all_blocks=1 00:04:58.835 --rc geninfo_unexecuted_blocks=1 00:04:58.835 00:04:58.835 ' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.835 --rc genhtml_branch_coverage=1 00:04:58.835 --rc genhtml_function_coverage=1 00:04:58.835 --rc genhtml_legend=1 00:04:58.835 --rc geninfo_all_blocks=1 00:04:58.835 --rc geninfo_unexecuted_blocks=1 00:04:58.835 
00:04:58.835 ' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.835 --rc genhtml_branch_coverage=1 00:04:58.835 --rc genhtml_function_coverage=1 00:04:58.835 --rc genhtml_legend=1 00:04:58.835 --rc geninfo_all_blocks=1 00:04:58.835 --rc geninfo_unexecuted_blocks=1 00:04:58.835 00:04:58.835 ' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.835 --rc genhtml_branch_coverage=1 00:04:58.835 --rc genhtml_function_coverage=1 00:04:58.835 --rc genhtml_legend=1 00:04:58.835 --rc geninfo_all_blocks=1 00:04:58.835 --rc geninfo_unexecuted_blocks=1 00:04:58.835 00:04:58.835 ' 00:04:58.835 07:05:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.835 07:05:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 ************************************ 00:04:58.835 START TEST thread_poller_perf 00:04:58.835 ************************************ 00:04:58.835 07:05:33 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:58.835 [2024-11-20 07:05:33.564985] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:04:58.835 [2024-11-20 07:05:33.565097] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042487 ] 00:04:59.095 [2024-11-20 07:05:33.650582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.095 [2024-11-20 07:05:33.692499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.095 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:00.035 [2024-11-20T06:05:34.802Z] ====================================== 00:05:00.035 [2024-11-20T06:05:34.802Z] busy:2412025546 (cyc) 00:05:00.035 [2024-11-20T06:05:34.802Z] total_run_count: 288000 00:05:00.035 [2024-11-20T06:05:34.802Z] tsc_hz: 2400000000 (cyc) 00:05:00.035 [2024-11-20T06:05:34.802Z] ====================================== 00:05:00.035 [2024-11-20T06:05:34.802Z] poller_cost: 8375 (cyc), 3489 (nsec) 00:05:00.035 00:05:00.035 real 0m1.192s 00:05:00.035 user 0m1.107s 00:05:00.035 sys 0m0.080s 00:05:00.035 07:05:34 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.035 07:05:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.035 ************************************ 00:05:00.035 END TEST thread_poller_perf 00:05:00.035 ************************************ 00:05:00.035 07:05:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.035 07:05:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:00.035 07:05:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.035 07:05:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.296 ************************************ 00:05:00.296 START TEST thread_poller_perf 00:05:00.296 ************************************ 00:05:00.296 07:05:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:00.296 [2024-11-20 07:05:34.836943] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:05:00.296 [2024-11-20 07:05:34.837041] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042799 ] 00:05:00.296 [2024-11-20 07:05:34.919479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.296 [2024-11-20 07:05:34.957874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.296 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:01.239 [2024-11-20T06:05:36.006Z] ====================================== 00:05:01.239 [2024-11-20T06:05:36.006Z] busy:2402229958 (cyc) 00:05:01.239 [2024-11-20T06:05:36.006Z] total_run_count: 3812000 00:05:01.239 [2024-11-20T06:05:36.006Z] tsc_hz: 2400000000 (cyc) 00:05:01.239 [2024-11-20T06:05:36.006Z] ====================================== 00:05:01.239 [2024-11-20T06:05:36.006Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:01.239 00:05:01.239 real 0m1.177s 00:05:01.239 user 0m1.103s 00:05:01.239 sys 0m0.069s 00:05:01.239 07:05:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.239 07:05:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.239 ************************************ 00:05:01.239 END TEST thread_poller_perf 00:05:01.239 ************************************ 00:05:01.499 07:05:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:01.499 00:05:01.499 real 0m2.725s 00:05:01.499 user 0m2.378s 00:05:01.499 sys 0m0.354s 00:05:01.499 07:05:36 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.499 07:05:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.499 ************************************ 00:05:01.499 END TEST thread 00:05:01.499 ************************************ 00:05:01.499 07:05:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:01.499 07:05:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.499 07:05:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.499 07:05:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.499 07:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.499 ************************************ 00:05:01.499 START TEST app_cmdline 00:05:01.499 ************************************ 00:05:01.499 07:05:36 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:01.499 * Looking for test storage... 
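As a sanity check on the two poller_perf summaries above: poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick shell verification, with the numbers copied straight from the tables (valid for this run only):

echo $(( 2412025546 / 288000 ))                              # 1 us period run -> 8375 cyc
awk 'BEGIN { printf "%d nsec\n", 8375 * 1e9 / 2400000000 }'  # -> 3489 nsec
echo $(( 2402229958 / 3812000 ))                             # 0 us period run -> 630 cyc
awk 'BEGIN { printf "%d nsec\n", 630 * 1e9 / 2400000000 }'   # -> 262 nsec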
00:05:01.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:01.500 07:05:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.500 07:05:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.500 07:05:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.760 07:05:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.760 --rc genhtml_branch_coverage=1 00:05:01.760 --rc genhtml_function_coverage=1 00:05:01.760 --rc genhtml_legend=1 00:05:01.760 --rc geninfo_all_blocks=1 00:05:01.760 --rc geninfo_unexecuted_blocks=1 00:05:01.760 00:05:01.760 ' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.760 --rc genhtml_branch_coverage=1 00:05:01.760 --rc genhtml_function_coverage=1 00:05:01.760 --rc genhtml_legend=1 00:05:01.760 --rc geninfo_all_blocks=1 00:05:01.760 --rc geninfo_unexecuted_blocks=1 
00:05:01.760 00:05:01.760 ' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.760 --rc genhtml_branch_coverage=1 00:05:01.760 --rc genhtml_function_coverage=1 00:05:01.760 --rc genhtml_legend=1 00:05:01.760 --rc geninfo_all_blocks=1 00:05:01.760 --rc geninfo_unexecuted_blocks=1 00:05:01.760 00:05:01.760 ' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.760 --rc genhtml_branch_coverage=1 00:05:01.760 --rc genhtml_function_coverage=1 00:05:01.760 --rc genhtml_legend=1 00:05:01.760 --rc geninfo_all_blocks=1 00:05:01.760 --rc geninfo_unexecuted_blocks=1 00:05:01.760 00:05:01.760 ' 00:05:01.760 07:05:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:01.760 07:05:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:01.760 07:05:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1043194 00:05:01.760 07:05:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1043194 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1043194 ']' 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.760 07:05:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:01.760 [2024-11-20 07:05:36.349649] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:05:01.760 [2024-11-20 07:05:36.349726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043194 ] 00:05:01.760 [2024-11-20 07:05:36.432089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.760 [2024-11-20 07:05:36.473871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:02.703 { 00:05:02.703 "version": "SPDK v25.01-pre git sha1 8ccf9ce7b", 00:05:02.703 "fields": { 00:05:02.703 "major": 25, 00:05:02.703 "minor": 1, 00:05:02.703 "patch": 0, 00:05:02.703 "suffix": "-pre", 00:05:02.703 "commit": "8ccf9ce7b" 00:05:02.703 } 00:05:02.703 } 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:02.703 07:05:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:02.703 07:05:37 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:02.964 request: 00:05:02.964 { 00:05:02.964 "method": "env_dpdk_get_mem_stats", 00:05:02.964 "req_id": 1 00:05:02.964 } 00:05:02.964 Got JSON-RPC error response 00:05:02.964 response: 00:05:02.964 { 00:05:02.964 "code": -32601, 00:05:02.964 "message": "Method not found" 00:05:02.964 } 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:02.964 07:05:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1043194 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1043194 ']' 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1043194 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1043194 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1043194' 00:05:02.964 killing process with pid 1043194 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@971 -- # kill 1043194 00:05:02.964 07:05:37 app_cmdline -- common/autotest_common.sh@976 -- # wait 1043194 00:05:03.225 00:05:03.225 real 0m1.670s 00:05:03.225 user 0m1.994s 00:05:03.225 sys 0m0.425s 00:05:03.225 07:05:37 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.225 07:05:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:03.225 ************************************ 00:05:03.225 END TEST app_cmdline 00:05:03.225 ************************************ 00:05:03.225 07:05:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.225 07:05:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.225 07:05:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.225 07:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.225 ************************************ 00:05:03.225 START TEST version 00:05:03.225 ************************************ 00:05:03.225 07:05:37 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:03.225 * Looking for test storage... 
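The cmdline test above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods resolve. A hedged sketch of the same exchange (paths as in the trace; jq assumed available, as in the test):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
$SPDK/scripts/rpc.py spdk_get_version                      # -> {"version": "SPDK v25.01-pre git sha1 8ccf9ce7b", ...}
$SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # -> rpc_get_methods, spdk_get_version
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats                # any other method fails:
# {"code": -32601, "message": "Method not found"}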
00:05:03.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:03.225 07:05:37 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.225 07:05:37 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.225 07:05:37 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.487 07:05:38 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.487 07:05:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.487 07:05:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.487 07:05:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.487 07:05:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.488 07:05:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.488 07:05:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.488 07:05:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.488 07:05:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.488 07:05:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.488 07:05:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.488 07:05:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.488 07:05:38 version -- scripts/common.sh@344 -- # case "$op" in 00:05:03.488 07:05:38 version -- scripts/common.sh@345 -- # : 1 00:05:03.488 07:05:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.488 07:05:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.488 07:05:38 version -- scripts/common.sh@365 -- # decimal 1 00:05:03.488 07:05:38 version -- scripts/common.sh@353 -- # local d=1 00:05:03.488 07:05:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.488 07:05:38 version -- scripts/common.sh@355 -- # echo 1 00:05:03.488 07:05:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.488 07:05:38 version -- scripts/common.sh@366 -- # decimal 2 00:05:03.488 07:05:38 version -- scripts/common.sh@353 -- # local d=2 00:05:03.488 07:05:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.488 07:05:38 version -- scripts/common.sh@355 -- # echo 2 00:05:03.488 07:05:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.488 07:05:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.488 07:05:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.488 07:05:38 version -- scripts/common.sh@368 -- # return 0 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.488 --rc genhtml_branch_coverage=1 00:05:03.488 --rc genhtml_function_coverage=1 00:05:03.488 --rc genhtml_legend=1 00:05:03.488 --rc geninfo_all_blocks=1 00:05:03.488 --rc geninfo_unexecuted_blocks=1 00:05:03.488 00:05:03.488 ' 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.488 --rc genhtml_branch_coverage=1 00:05:03.488 --rc genhtml_function_coverage=1 00:05:03.488 --rc genhtml_legend=1 00:05:03.488 --rc geninfo_all_blocks=1 00:05:03.488 --rc geninfo_unexecuted_blocks=1 00:05:03.488 00:05:03.488 ' 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.488 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.488 --rc genhtml_branch_coverage=1 00:05:03.488 --rc genhtml_function_coverage=1 00:05:03.488 --rc genhtml_legend=1 00:05:03.488 --rc geninfo_all_blocks=1 00:05:03.488 --rc geninfo_unexecuted_blocks=1 00:05:03.488 00:05:03.488 ' 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.488 --rc genhtml_branch_coverage=1 00:05:03.488 --rc genhtml_function_coverage=1 00:05:03.488 --rc genhtml_legend=1 00:05:03.488 --rc geninfo_all_blocks=1 00:05:03.488 --rc geninfo_unexecuted_blocks=1 00:05:03.488 00:05:03.488 ' 00:05:03.488 07:05:38 version -- app/version.sh@17 -- # get_header_version major 00:05:03.488 07:05:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # cut -f2 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.488 07:05:38 version -- app/version.sh@17 -- # major=25 00:05:03.488 07:05:38 version -- app/version.sh@18 -- # get_header_version minor 00:05:03.488 07:05:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # cut -f2 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.488 07:05:38 version -- app/version.sh@18 -- # minor=1 00:05:03.488 07:05:38 version -- app/version.sh@19 -- # get_header_version patch 00:05:03.488 07:05:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # cut -f2 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.488 07:05:38 version -- app/version.sh@19 -- # patch=0 00:05:03.488 07:05:38 version -- app/version.sh@20 -- # get_header_version suffix 00:05:03.488 07:05:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # cut -f2 00:05:03.488 07:05:38 version -- app/version.sh@14 -- # tr -d '"' 00:05:03.488 07:05:38 version -- app/version.sh@20 -- # suffix=-pre 00:05:03.488 07:05:38 version -- app/version.sh@22 -- # version=25.1 00:05:03.488 07:05:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:03.488 07:05:38 version -- app/version.sh@28 -- # version=25.1rc0 00:05:03.488 07:05:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:03.488 07:05:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:03.488 07:05:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:03.488 07:05:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:03.488 00:05:03.488 real 0m0.276s 00:05:03.488 user 0m0.183s 00:05:03.488 sys 0m0.138s 00:05:03.488 07:05:38 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.488 
07:05:38 version -- common/autotest_common.sh@10 -- # set +x 00:05:03.488 ************************************ 00:05:03.488 END TEST version 00:05:03.488 ************************************ 00:05:03.488 07:05:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:03.488 07:05:38 -- spdk/autotest.sh@194 -- # uname -s 00:05:03.488 07:05:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:03.488 07:05:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.488 07:05:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:03.488 07:05:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:03.488 07:05:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.488 07:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.488 07:05:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:03.488 07:05:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:03.488 07:05:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.488 07:05:38 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:03.488 07:05:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.488 07:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.749 ************************************ 00:05:03.749 START TEST nvmf_tcp 00:05:03.749 ************************************ 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:03.749 * Looking for test storage... 
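The version test above recovers 25.1rc0 by grepping include/spdk/version.h; a condensed, self-contained sketch of that parsing (the -pre to rc0 substitution mirrors the py_version comparison in the trace and is assumed, not read from version.sh):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
H=$SPDK/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$H" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
version=$major.$minor; (( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # assumed mapping; yields 25.1rc0 as above
echo "$version"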
00:05:03.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.749 07:05:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.749 --rc genhtml_branch_coverage=1 00:05:03.749 --rc genhtml_function_coverage=1 00:05:03.749 --rc genhtml_legend=1 00:05:03.749 --rc geninfo_all_blocks=1 00:05:03.749 --rc geninfo_unexecuted_blocks=1 00:05:03.749 00:05:03.749 ' 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.749 --rc genhtml_branch_coverage=1 00:05:03.749 --rc genhtml_function_coverage=1 00:05:03.749 --rc genhtml_legend=1 00:05:03.749 --rc geninfo_all_blocks=1 00:05:03.749 --rc geninfo_unexecuted_blocks=1 00:05:03.749 00:05:03.749 ' 00:05:03.749 07:05:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.750 --rc genhtml_branch_coverage=1 00:05:03.750 --rc genhtml_function_coverage=1 00:05:03.750 --rc genhtml_legend=1 00:05:03.750 --rc geninfo_all_blocks=1 00:05:03.750 --rc geninfo_unexecuted_blocks=1 00:05:03.750 00:05:03.750 ' 00:05:03.750 07:05:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.750 --rc genhtml_branch_coverage=1 00:05:03.750 --rc genhtml_function_coverage=1 00:05:03.750 --rc genhtml_legend=1 00:05:03.750 --rc geninfo_all_blocks=1 00:05:03.750 --rc geninfo_unexecuted_blocks=1 00:05:03.750 00:05:03.750 ' 00:05:03.750 07:05:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:03.750 07:05:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:03.750 07:05:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:03.750 07:05:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:03.750 07:05:38 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.750 07:05:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.750 ************************************ 00:05:03.750 START TEST nvmf_target_core 00:05:03.750 ************************************ 00:05:03.750 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:04.012 * Looking for test storage... 00:05:04.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.012 --rc genhtml_branch_coverage=1 00:05:04.012 --rc genhtml_function_coverage=1 00:05:04.012 --rc genhtml_legend=1 00:05:04.012 --rc geninfo_all_blocks=1 00:05:04.012 --rc geninfo_unexecuted_blocks=1 00:05:04.012 00:05:04.012 ' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.012 --rc genhtml_branch_coverage=1 00:05:04.012 --rc genhtml_function_coverage=1 00:05:04.012 --rc genhtml_legend=1 00:05:04.012 --rc geninfo_all_blocks=1 00:05:04.012 --rc geninfo_unexecuted_blocks=1 00:05:04.012 00:05:04.012 ' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.012 --rc genhtml_branch_coverage=1 00:05:04.012 --rc genhtml_function_coverage=1 00:05:04.012 --rc genhtml_legend=1 00:05:04.012 --rc geninfo_all_blocks=1 00:05:04.012 --rc geninfo_unexecuted_blocks=1 00:05:04.012 00:05:04.012 ' 00:05:04.012 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.012 --rc genhtml_branch_coverage=1 00:05:04.012 --rc genhtml_function_coverage=1 00:05:04.012 --rc genhtml_legend=1 00:05:04.013 --rc geninfo_all_blocks=1 00:05:04.013 --rc geninfo_unexecuted_blocks=1 00:05:04.013 00:05:04.013 ' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:04.013 
************************************ 00:05:04.013 START TEST nvmf_abort 00:05:04.013 ************************************ 00:05:04.013 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:04.276 * Looking for test storage... 00:05:04.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.276 --rc genhtml_branch_coverage=1 00:05:04.276 --rc genhtml_function_coverage=1 00:05:04.276 --rc genhtml_legend=1 00:05:04.276 --rc geninfo_all_blocks=1 00:05:04.276 --rc geninfo_unexecuted_blocks=1 00:05:04.276 00:05:04.276 ' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.276 --rc genhtml_branch_coverage=1 00:05:04.276 --rc genhtml_function_coverage=1 00:05:04.276 --rc genhtml_legend=1 00:05:04.276 --rc geninfo_all_blocks=1 00:05:04.276 --rc geninfo_unexecuted_blocks=1 00:05:04.276 00:05:04.276 ' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.276 --rc genhtml_branch_coverage=1 00:05:04.276 --rc genhtml_function_coverage=1 00:05:04.276 --rc genhtml_legend=1 00:05:04.276 --rc geninfo_all_blocks=1 00:05:04.276 --rc geninfo_unexecuted_blocks=1 00:05:04.276 00:05:04.276 ' 00:05:04.276 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.276 --rc genhtml_branch_coverage=1 00:05:04.277 --rc genhtml_function_coverage=1 00:05:04.277 --rc genhtml_legend=1 00:05:04.277 --rc geninfo_all_blocks=1 00:05:04.277 --rc geninfo_unexecuted_blocks=1 00:05:04.277 00:05:04.277 ' 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.277 07:05:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
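A side note before the test body runs: the lcov probe that has now appeared several times in this trace is autotest_common.sh deciding which coverage option spelling to use. scripts/common.sh's lt/cmp_versions split both version strings on ".", "-" and ":" and compare them field by field, and a result below 2 selects the older --rc lcov_branch_coverage=1 option names. A minimal standalone sketch of that comparison, assuming purely numeric fields (names here are illustrative, not the script's exact code):

    version_lt() {
        local IFS=.-:                 # split fields on dots, dashes, colons
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1                      # equal versions: not less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov: keep the old --rc option names"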
00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:04.277 07:05:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.444 07:05:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:12.444 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:12.444 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:12.444 07:05:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:12.444 Found net devices under 0000:31:00.0: cvl_0_0 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.444 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:12.445 Found net devices under 0000:31:00.1: cvl_0_1 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.445 07:05:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:12.445 07:05:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:12.445 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.706 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.706 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.706 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:12.706 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:12.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:05:12.706 00:05:12.706 --- 10.0.0.2 ping statistics --- 00:05:12.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.706 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:05:12.706 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:12.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:05:12.706 00:05:12.706 --- 10.0.0.1 ping statistics --- 00:05:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.707 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1048057 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1048057 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1048057 ']' 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.707 07:05:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.707 [2024-11-20 07:05:47.376531] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
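At this point nvmf/common.sh has finished wiring up the physical-NIC TCP topology these tests run on: the first E810 port (cvl_0_0) was moved into a fresh network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, an iptables rule opened the NVMe/TCP listener port, and one ping in each direction verified the path. Condensed into plain commands (interface names and addresses exactly as this log reports them; the iptables comment tag and error handling are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                      # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> root namespace

The nvmf_tgt application launched just above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why every target-side command in the trace carries the netns wrapper prefix.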
00:05:12.707 [2024-11-20 07:05:47.376594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:12.968 [2024-11-20 07:05:47.488810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.968 [2024-11-20 07:05:47.543286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:12.968 [2024-11-20 07:05:47.543343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:12.968 [2024-11-20 07:05:47.543353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.968 [2024-11-20 07:05:47.543360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.968 [2024-11-20 07:05:47.543366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:12.968 [2024-11-20 07:05:47.545510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.968 [2024-11-20 07:05:47.545678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.968 [2024-11-20 07:05:47.545678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 [2024-11-20 07:05:48.238853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 Malloc0 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 Delay0 
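The Delay0 bdev created above is what makes an abort test meaningful. Per SPDK's bdev_delay documentation, the four values are latencies in microseconds (-r/-t average and p99 read, -w/-n average and p99 write), so every I/O against Delay0 takes on the order of a second; that keeps commands pending at the target long enough for the initiator's abort requests to catch them in flight. The same pair of RPCs issued directly (the trace drives them through the rpc_cmd wrapper; path abbreviated):

    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB backing bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s per I/O, reads and writes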
00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.540 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.800 [2024-11-20 07:05:48.321054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.800 07:05:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:13.800 [2024-11-20 07:05:48.450366] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:16.343 Initializing NVMe Controllers 00:05:16.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:16.343 controller IO queue size 128 less than required 00:05:16.343 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:16.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:16.343 Initialization complete. Launching workers. 
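For readability, the abort example invocation traced above, unpacked (option meanings as SPDK's example apps conventionally use them, so treat the annotations as a best-effort reading rather than authoritative). The "queue size 128 less than required" notice is expected: a deep queue against the ~1 s delay bdev is exactly how requests pile up to be aborted, and the counters on the next lines show the outcome (28472 aborts submitted, 28412 succeeded, 60 unsuccessful).

    # -r: transport ID of the listener configured above
    # -c 0x1: core mask (single core)    -t 1: run time in seconds
    # -l warning: log level              -q 128: queue depth
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128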
00:05:16.343 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 28408 00:05:16.343 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28472, failed to submit 62 00:05:16.343 success 28412, unsuccessful 60, failed 0 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:16.343 rmmod nvme_tcp 00:05:16.343 rmmod nvme_fabrics 00:05:16.343 rmmod nvme_keyring 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1048057 ']' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1048057 ']' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1048057' 00:05:16.343 killing process with pid 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1048057 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:16.343 07:05:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:16.343 07:05:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.259 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:18.259 00:05:18.259 real 0m14.238s 00:05:18.259 user 0m14.532s 00:05:18.259 sys 0m7.123s 00:05:18.259 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.259 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.259 ************************************ 00:05:18.259 END TEST nvmf_abort 00:05:18.259 ************************************ 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:18.521 ************************************ 00:05:18.521 START TEST nvmf_ns_hotplug_stress 00:05:18.521 ************************************ 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:18.521 * Looking for test storage... 
00:05:18.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.521 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.784 --rc genhtml_branch_coverage=1 00:05:18.784 --rc genhtml_function_coverage=1 00:05:18.784 --rc genhtml_legend=1 00:05:18.784 --rc geninfo_all_blocks=1 00:05:18.784 --rc geninfo_unexecuted_blocks=1 00:05:18.784 00:05:18.784 ' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
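The defaults nvmf/common.sh has just established are the test's identity and addressing: TCP ports 4420 through 4422 (4420 is the standard NVMe/TCP port), a host NQN minted per run by 'nvme gen-hostnqn', and a host ID equal to the UUID portion of that NQN. Condensed below; the NVME_HOSTID derivation is a sketch consistent with the logged values rather than the script's exact text:

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
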
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.784 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:18.785 07:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
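The '[: : integer expression expected' complaint from common.sh line 33 is a real, if harmless, script bug this run surfaces: the variable under test expands to an empty string, so '[' has nothing numeric to compare against 1 and the branch silently falls through. Defaulting the expansion keeps the test quiet and well defined; SOME_FLAG below stands in for whatever variable line 33 actually reads (empty in this run) and is not the script's name:

  SOME_FLAG=""
  [ "$SOME_FLAG" -eq 1 ] 2>/dev/null     # reproduces: [: : integer expression expected
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # empty defaults to 0, comparison is safe
    echo "flag set"
  fi
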
local -ga e810 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:26.940 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.940 
07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:26.940 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:26.940 Found net devices under 0000:31:00.0: cvl_0_0 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.940 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:26.941 Found net devices under 0000:31:00.1: cvl_0_1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.941 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
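The walk above keys NIC families off PCI vendor:device pairs held in a prebuilt pci_bus_cache: 0x8086 with 0x1592 or 0x159b is an Intel E810, 0x37d2 is an X722, and the 0x15b3 entries cover Mellanox parts. Both 0000:31:00.x functions match 0x159b, their driver is ice, and the netdev names (cvl_0_0, cvl_0_1) are read from each device's net/ directory in sysfs. A sketch of the same discovery done directly against sysfs, skipping the cache:

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == "$intel" ]] || continue
    case $(<"$dev/device") in
      0x1592|0x159b)     # the E810 device IDs matched in the log
        echo "E810 at ${dev##*/}, netdevs: $(ls "$dev/net" 2>/dev/null)" ;;
    esac
  done
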
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:27.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:27.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:05:27.202 00:05:27.202 --- 10.0.0.2 ping statistics --- 00:05:27.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.202 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:27.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:27.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:05:27.202 00:05:27.202 --- 10.0.0.1 ping statistics --- 00:05:27.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.202 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1053843 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1053843 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
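The plumbing just traced splits the two E810 ports so a single host can exercise NVMe/TCP end to end against itself: the target-side port moves into its own network namespace as 10.0.0.2, the initiator-side port stays in the root namespace as 10.0.0.1, the NVMe/TCP port is opened in the firewall, and a two-way ping proves the path before any NVMe traffic flows. The same sequence, condensed from the logged commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
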
1053843 ']' 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.202 07:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:27.202 [2024-11-20 07:06:01.849500] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:05:27.202 [2024-11-20 07:06:01.849566] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:27.202 [2024-11-20 07:06:01.960791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.464 [2024-11-20 07:06:02.011475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:27.464 [2024-11-20 07:06:02.011536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:27.464 [2024-11-20 07:06:02.011545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:27.464 [2024-11-20 07:06:02.011552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:27.464 [2024-11-20 07:06:02.011559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
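nvmfappstart then launches nvmf_tgt inside the target namespace (hence the 'ip netns exec cvl_0_0_ns_spdk' prefix on the logged command line, with shm id 0, tracepoint mask 0xFFFF, and core mask 0xE) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. Roughly, assuming a poll of rpc_get_methods is an adequate liveness probe; the loop below is a sketch of waitforlisten's behavior, not its code:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done
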
00:05:27.464 [2024-11-20 07:06:02.013657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.464 [2024-11-20 07:06:02.013794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.464 [2024-11-20 07:06:02.013794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:28.035 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:28.296 [2024-11-20 07:06:02.862828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.296 07:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:28.557 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.557 [2024-11-20 07:06:03.232369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.557 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:28.818 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:29.079 Malloc0 00:05:29.079 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:29.079 Delay0 00:05:29.079 07:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.340 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:29.601 NULL1 00:05:29.601 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
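The rpc.py sequence traced above and spilling into the next lines builds the entire target: a TCP transport taking in-capsule data up to 8192 bytes, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and two backing bdevs, Malloc0 (created with the logged 32 and 512 size arguments) wrapped by Delay0 with large fixed latencies on every I/O path, plus NULL1 created at size 1000, the number the resize loop below walks upward. Condensed, with rpc standing for the full scripts/rpc.py path used in the log:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
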
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:29.601 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1054252 00:05:29.601 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:29.601 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:29.601 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.863 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.124 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:30.124 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:30.386 true 00:05:30.386 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:30.386 07:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.386 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.647 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:30.647 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:30.908 true 00:05:30.908 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:30.908 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.169 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.170 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:31.170 07:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:31.431 true 00:05:31.431 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:31.431 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.692 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.692 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:31.692 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:31.953 true 00:05:31.953 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:31.953 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.214 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.214 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:32.214 07:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:32.474 true 00:05:32.474 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:32.474 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.735 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.735 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:32.735 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:32.995 true 00:05:32.995 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:32.995 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.257 07:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.518 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:33.518 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:33.518 true 00:05:33.518 07:06:08 
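From here to the end of the section the log is one loop unrolled. spdk_nvme_perf holds 128 queued 512-byte random reads against 10.0.0.2:4420 for 30 seconds, and for as long as that process stays alive the test rips namespace 1 out of cnode1, re-adds Delay0, and resizes NULL1 one step larger each pass, which is exactly the null_size=1001, 1002, ... progression printed above and below. A sketch of the ns_hotplug_stress.sh loop, reconstructed from the traced script lines (44 through 50) rather than copied from the script:

  rpc=scripts/rpc.py
  ./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do     # keep hotplugging until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( null_size++ ))
    $rpc bdev_null_resize NULL1 "$null_size"
  done
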
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:33.518 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.779 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.040 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:34.040 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:34.040 true 00:05:34.040 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:34.040 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.300 07:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.561 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:34.561 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:34.561 true 00:05:34.561 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:34.823 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.823 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.084 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:35.084 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:35.346 true 00:05:35.346 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:35.346 07:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.346 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.608 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:35.608 07:06:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:35.868 true 00:05:35.868 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:35.868 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.868 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.129 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:36.129 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:36.390 true 00:05:36.390 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:36.390 07:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.651 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.651 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:36.651 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:36.912 true 00:05:36.912 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:36.912 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.174 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.174 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:37.174 07:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:37.436 true 00:05:37.436 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:37.436 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.697 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.958 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:37.958 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:37.958 true 00:05:37.958 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:37.958 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.218 07:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.478 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:38.478 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:38.478 true 00:05:38.478 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:38.478 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.738 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.998 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:38.998 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:38.998 true 00:05:38.998 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:38.998 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.259 07:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.519 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:39.519 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:39.519 true 00:05:39.519 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:39.519 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.780 07:06:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.040 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:40.040 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:40.300 true 00:05:40.300 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:40.300 07:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.300 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.560 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:40.560 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:40.820 true 00:05:40.820 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:40.820 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.820 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.080 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:41.080 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:41.340 true 00:05:41.340 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:41.340 07:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.600 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.600 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:41.600 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:41.860 true 00:05:41.860 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:41.860 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.122 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.122 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:42.122 07:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:42.383 true 00:05:42.383 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:42.383 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.643 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.643 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:42.643 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:42.903 true 00:05:42.903 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:42.903 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.164 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.425 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:43.425 07:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:43.425 true 00:05:43.425 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:43.425 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.686 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.946 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:43.946 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:43.946 true 00:05:43.946 07:06:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:43.946 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.206 07:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.467 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:44.467 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:44.467 true 00:05:44.727 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:44.727 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.727 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.987 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:44.987 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:45.272 true 00:05:45.272 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:45.272 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.272 07:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.606 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:45.606 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:45.606 true 00:05:45.606 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:45.606 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.920 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.181 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:46.181 07:06:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:46.181 true 00:05:46.181 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:46.181 07:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.442 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.704 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:46.704 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:46.704 true 00:05:46.704 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:46.704 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.965 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.227 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:47.227 07:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:47.227 true 00:05:47.488 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:47.488 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.488 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.749 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:47.749 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:48.009 true 00:05:48.009 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:48.009 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.009 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.270 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:48.270 07:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:48.531 true 00:05:48.531 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:48.531 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.793 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.793 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:48.793 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:49.053 true 00:05:49.053 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:49.053 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.314 07:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.314 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:49.314 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:49.574 true 00:05:49.575 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:49.575 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.842 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.842 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:49.842 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:50.102 true 00:05:50.102 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:50.102 07:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.361 07:06:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.620 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:50.620 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:50.620 true 00:05:50.620 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:50.620 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.878 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.138 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:51.138 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:51.138 true 00:05:51.138 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:51.138 07:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.398 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.657 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:51.657 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:51.917 true 00:05:51.918 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:51.918 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.918 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.178 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:52.178 07:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:52.439 true 00:05:52.439 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:52.439 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.699 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.699 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:52.699 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:52.957 true 00:05:52.957 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:52.957 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.217 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.217 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:53.217 07:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:53.477 true 00:05:53.477 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:53.477 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.737 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.996 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:53.996 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:53.996 true 00:05:53.996 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:53.996 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.255 07:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.513 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:54.513 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:54.513 true 00:05:54.513 07:06:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:54.514 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.772 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.032 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:55.032 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:55.032 true 00:05:55.032 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:55.032 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.292 07:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.553 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:55.553 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:55.553 true 00:05:55.814 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:55.814 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.814 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.074 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:56.074 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:56.074 true 00:05:56.334 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:56.334 07:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.334 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.594 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:56.594 07:06:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:56.855 true 00:05:56.855 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:56.855 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.855 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.116 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:57.116 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:57.377 true 00:05:57.377 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:57.377 07:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.638 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.638 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:57.638 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:57.898 true 00:05:57.898 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:57.898 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.158 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.158 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:58.158 07:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:58.418 true 00:05:58.418 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:58.418 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.679 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.940 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:58.940 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:58.940 true 00:05:58.940 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:58.940 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.200 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.461 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:59.461 07:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:59.461 true 00:05:59.461 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:59.461 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.721 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.980 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:59.980 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:59.980 Initializing NVMe Controllers 00:05:59.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:59.980 Controller IO queue size 128, less than required. 00:05:59.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:59.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:59.980 Initialization complete. Launching workers. 
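The xtrace above is the namespace-resize phase of ns_hotplug_stress.sh: for as long as the background perf initiator (PID 1054252) stays alive, the script removes namespace 1, re-attaches the Delay0 bdev as a namespace, bumps null_size by one (1023 through 1055 in this excerpt), and resizes the NULL1 null bdev to match. Reconstructed from the @44-@50 markers, the loop is roughly the sketch below; PERF_PID and rpc_py are assumed shorthands, not names confirmed by this log.

    # Sketch of ns_hotplug_stress.sh lines 44-50, inferred from the xtrace above.
    while kill -0 $PERF_PID; do          # loop until the perf initiator exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))     # traces as null_size=1023, 1024, ...
        $rpc_py bdev_null_resize NULL1 $null_size
    done

Once perf exits, kill -0 fails ("No such process" in the output below), the loop ends, and the script reaps the PID with wait before removing namespaces 1 and 2. The perf summary that follows reports the I/O completed while namespaces were being hot-swapped underneath the connected initiator.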
00:05:59.980 ========================================================
00:05:59.980                                                                             Latency(us)
00:05:59.980 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:05:59.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30360.83      14.82    4215.74    1350.24   45495.47
00:05:59.980 ========================================================
00:05:59.980 Total                                                                    :   30360.83      14.82    4215.74    1350.24   45495.47
00:05:59.980
00:05:59.980 true 00:05:59.980 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1054252 00:05:59.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1054252) - No such process 00:05:59.980 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1054252 00:05:59.981 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.241 07:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:00.501 null0 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.501 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:00.762 null1 00:06:00.762 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.762 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.762 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:01.022 null2 00:06:01.022 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.022 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.022 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:01.022 null3 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i
)) 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:01.282 null4 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.282 07:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:01.543 null5 00:06:01.543 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.543 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.543 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:01.804 null6 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:01.804 null7 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
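With the perf phase done, the parallel hotplug stress begins: lines 58-60 of the script create eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one per worker. A sketch of that setup loop, inferred from the @58-@60 markers (rpc_py again an assumed shorthand):

    nthreads=8
    pids=()
    # One 100 MB null bdev with 4096-byte blocks per worker thread.
    for ((i = 0; i < nthreads; ++i)); do
        $rpc_py bdev_null_create null$i 100 4096
    done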
00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
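Each worker runs the add_remove helper traced at @14-@18: it hot-adds its bdev under a fixed namespace ID and immediately removes it again, ten times in a row. Reconstructed from the xtrace, the helper is roughly:

    # Sketch of add_remove (ns_hotplug_stress.sh lines 14-18), from the xtrace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

Each worker touches only its own namespace ID, so the eight add/remove streams never collide on an NSID even though they hammer the same subsystem concurrently.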
00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
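The workers are launched as background jobs and their PIDs collected, which is why the @14/@16/@17/@18 records from eight subshells interleave arbitrarily from here on; the parent then blocks in wait on all of them (the wait on eight PIDs, 1061251 through 1061265, just below). The spawn/join pattern from the @62-@66 markers, as a sketch:

    # Namespace IDs 1-8 paired with bdevs null0-null7, per the @14 traces.
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"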
00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1061251 1061252 1061255 1061257 1061259 1061261 1061263 1061265 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.804 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.066 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.327 07:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.327 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.327 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.589 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:02.850 07:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
[... the same @16-@18 records repeat, hot-adding and hot-removing namespaces 1-8 (bdevs null0-null7) on nqn.2016-06.io.spdk:cnode1 in varying order, elapsed 00:06:02.850 through 00:06:05.991 (07:06:37-07:06:40), until every loop counter reaches 10 ...]
00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:05.991 rmmod nvme_tcp 00:06:05.991 rmmod nvme_fabrics 00:06:05.991 rmmod nvme_keyring 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1053843 ']' 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1053843 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1053843 ']' 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1053843 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1053843 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1053843' 00:06:05.991 killing process with pid 1053843 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1053843 00:06:05.991 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1053843 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.253 07:06:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.253 07:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.169 00:06:08.169 real 0m49.755s 00:06:08.169 user 3m19.986s 00:06:08.169 sys 0m17.799s 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.169 ************************************ 00:06:08.169 END TEST nvmf_ns_hotplug_stress 00:06:08.169 ************************************ 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.169 ************************************ 00:06:08.169 START TEST nvmf_delete_subsystem 00:06:08.169 ************************************ 00:06:08.169 07:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:08.431 * Looking for test storage... 
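
For reference, the add/remove cycle traced in the ns_hotplug_stress run above boils down to the following shape. This is a minimal sketch: the rpc.py invocations, the subsystem NQN, and the nsid-N-to-null(N-1) bdev mapping are taken verbatim from the trace, while the loop header and the random namespace pick are assumptions. The real test runs several of these loops concurrently, which is why the traced add/remove records interleave.

    #!/usr/bin/env bash
    # Sketch of the loop traced at target/ns_hotplug_stress.sh @16-@18:
    # repeatedly hot-attach one of the null* bdevs as a namespace of
    # cnode1, then detach it again.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    for (( i = 0; i < 10; ++i )); do
        n=$(( RANDOM % 8 + 1 ))    # pick namespace id 1-8 (assumed random)
        # nsid N is backed by bdev null(N-1), as in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$(( n - 1 ))"
        "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$n"
    done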
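The teardown that closes the test (nvmftestfini, nvmf/common.sh @516-@524 in the trace) reduces to roughly the steps below. The command names, the SPDK_NVMF iptables filter, and the cvl_0_1 interface come from the trace; the function bodies here are a simplified assumption, not the verbatim helpers.

    # Condensed sketch of the traced nvmftestfini sequence.
    nvmfcleanup_sketch() {
        sync
        # the rmmod output above shows this pulls nvme_tcp, nvme_fabrics
        # and nvme_keyring out of the kernel
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics
    }

    nvmftestfini_sketch() {
        nvmfcleanup_sketch
        # stop the nvmf_tgt reactor (pid 1053843 in this run)
        if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
            kill "$nvmfpid"
            wait "$nvmfpid"
        fi
        iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop test rules
        ip -4 addr flush cvl_0_1                              # clear the test NIC
    }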
00:06:08.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.432 ' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.432 07:06:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:16.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.571 
07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:16.571 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.571 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:16.571 Found net devices under 0000:31:00.0: cvl_0_0 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:16.572 Found net devices under 0000:31:00.1: cvl_0_1 
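For readers following the trace: gather_supported_nvmf_pci_devs resolves each supported NIC's PCI address to its kernel net device through sysfs, which is where the two "Found net devices under ..." lines above come from. A minimal sketch of that lookup (the loop body mirrors the pci_net_devs expansion in nvmf/common.sh traced above; the two PCI addresses are the E810 ports reported in this run):

  for pci in 0000:31:00.0 0000:31:00.1; do
      # Every network-capable PCI function lists its netdev name(s) under
      # /sys/bus/pci/devices/<addr>/net/; an unmatched glob means the device
      # is not bound to a network driver, so skip it.
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue
      echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
  done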
00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.572 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:06:16.833 00:06:16.833 --- 10.0.0.2 ping statistics --- 00:06:16.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.833 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:06:16.833 00:06:16.833 --- 10.0.0.1 ping statistics --- 00:06:16.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.833 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1067109 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1067109 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1067109 ']' 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.833 07:06:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.833 07:06:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.094 [2024-11-20 07:06:51.604242] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:06:17.094 [2024-11-20 07:06:51.604310] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.094 [2024-11-20 07:06:51.695070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.094 [2024-11-20 07:06:51.735593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.094 [2024-11-20 07:06:51.735632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:17.094 [2024-11-20 07:06:51.735640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.094 [2024-11-20 07:06:51.735647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.094 [2024-11-20 07:06:51.735652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.094 [2024-11-20 07:06:51.736913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.094 [2024-11-20 07:06:51.736916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.665 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.665 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:17.665 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:17.665 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.665 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 [2024-11-20 07:06:52.444096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.926 07:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 [2024-11-20 07:06:52.468291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 NULL1 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 Delay0 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1067459 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:17.926 07:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.926 [2024-11-20 07:06:52.564986] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
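Condensed, the target-side sequence just traced is: create the TCP transport, a subsystem capped at 10 namespaces, a listener on the in-namespace target IP, and a null bdev wrapped in a delay bdev as the namespace, then drive I/O at it so the delete below races against outstanding requests. A sketch of the equivalent standalone commands (a reconstruction, not the script itself: rpc_cmd in the harness wraps SPDK's scripts/rpc.py against the nvmf_tgt started above, and the flags are the ones shown in the xtrace):

  # Transport, subsystem (max 10 namespaces), TCP listener on the target IP.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Null backend (1000 MiB, 512 B blocks) behind a delay bdev, so that I/O is
  # still in flight when the subsystem is deleted.
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Initiator-side load, backgrounded so the delete can race against it.
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!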
00:06:19.836 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:19.836 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:19.836 07:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[a long run of repeated per-I/O spdk_nvme_perf completion lines elided: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries, interleaved with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR* messages "The recv state of tqpair=... is same with the state(6) to be set" for tqpairs 0x1b37f00, 0x7ff32400d020, 0x1b395e0, 0x1b380e0, 0x1b384a0, 0x7ff32400d350 and 0x7ff324000c40, logged between 07:06:54.728002 and 07:06:55.736070 while the subsystem was deleted under load]
00:06:21.040 Initializing NVMe Controllers
00:06:21.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:21.040 Controller IO queue size 128, less than required.
00:06:21.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:21.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:21.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:21.040 Initialization complete. Launching workers.
00:06:21.040 ========================================================
00:06:21.040 Latency(us)
00:06:21.040 Device Information : IOPS MiB/s Average min max
00:06:21.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.45 0.08 905737.13 238.99 1005478.51
00:06:21.040 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.95 0.08 907616.70 335.44 1009588.87
00:06:21.040 ========================================================
00:06:21.040 Total : 329.40 0.16 906678.34 238.99 1009588.87
00:06:21.040
00:06:21.040 [2024-11-20 07:06:55.736483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b395e0 (9): Bad file descriptor
00:06:21.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:21.040 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:21.040 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:21.040 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1067459
00:06:21.040 07:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1067459
00:06:21.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1067459) - No such process
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1067459
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1067459
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1067459
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:21.610 07:06:56
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.610 [2024-11-20 07:06:56.269152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1068148 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:21.610 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.610 [2024-11-20 07:06:56.346000] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
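The repeated "kill -0 / sleep 0.5" lines that follow are the script's exit poll. Roughly, as reconstructed from the xtrace (perf_pid is the script's variable for the backgrounded spdk_nvme_perf, and the threshold of 20 half-second polls is the one traced at delete_subsystem.sh line 60; the exact failure action is an assumption):

  delay=0
  # nvmf_delete_subsystem is issued while perf still has queued I/O, so perf
  # is expected to exit on its own with I/O errors; give up if it is still
  # alive after ~10 seconds.
  while kill -0 $perf_pid 2> /dev/null; do
      sleep 0.5
      if (( delay++ > 20 )); then
          exit 1   # sketch: the real script's failure handling may differ
      fi
  done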
00:06:22.181 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.181 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:22.181 07:06:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.753 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.753 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:22.753 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.322 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.322 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:23.322 07:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.582 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.582 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:23.582 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.150 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.150 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:24.150 07:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.720 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.720 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148 00:06:24.720 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.980 Initializing NVMe Controllers 00:06:24.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:24.980 Controller IO queue size 128, less than required. 00:06:24.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:24.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:24.980 Initialization complete. Launching workers. 
00:06:24.980 ========================================================
00:06:24.980 Latency(us)
00:06:24.980 Device Information : IOPS MiB/s Average min max
00:06:24.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002838.27 1000270.74 1045259.53
00:06:24.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003006.76 1000279.49 1041575.61
00:06:24.980 ========================================================
00:06:24.980 Total : 256.00 0.12 1002922.51 1000270.74 1045259.53
00:06:24.980
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1068148
00:06:25.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1068148) - No such process
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1068148
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:25.241 rmmod nvme_tcp
00:06:25.241 rmmod nvme_fabrics
00:06:25.241 rmmod nvme_keyring
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1067109 ']'
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1067109
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1067109 ']'
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1067109
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1067109
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:25.241 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '['
reactor_0 = sudo ']' 00:06:25.242 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1067109' 00:06:25.242 killing process with pid 1067109 00:06:25.242 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1067109 00:06:25.242 07:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1067109 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.503 07:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.417 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:27.417 00:06:27.417 real 0m19.229s 00:06:27.417 user 0m31.103s 00:06:27.417 sys 0m7.461s 00:06:27.417 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.417 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.417 ************************************ 00:06:27.417 END TEST nvmf_delete_subsystem 00:06:27.417 ************************************ 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.679 ************************************ 00:06:27.679 START TEST nvmf_host_management 00:06:27.679 ************************************ 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:27.679 * Looking for test storage... 
00:06:27.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.679 --rc genhtml_branch_coverage=1 00:06:27.679 --rc genhtml_function_coverage=1 00:06:27.679 --rc genhtml_legend=1 00:06:27.679 --rc geninfo_all_blocks=1 00:06:27.679 --rc geninfo_unexecuted_blocks=1 00:06:27.679 00:06:27.679 ' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.679 --rc genhtml_branch_coverage=1 00:06:27.679 --rc genhtml_function_coverage=1 00:06:27.679 --rc genhtml_legend=1 00:06:27.679 --rc geninfo_all_blocks=1 00:06:27.679 --rc geninfo_unexecuted_blocks=1 00:06:27.679 00:06:27.679 ' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.679 --rc genhtml_branch_coverage=1 00:06:27.679 --rc genhtml_function_coverage=1 00:06:27.679 --rc genhtml_legend=1 00:06:27.679 --rc geninfo_all_blocks=1 00:06:27.679 --rc geninfo_unexecuted_blocks=1 00:06:27.679 00:06:27.679 ' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.679 --rc genhtml_branch_coverage=1 00:06:27.679 --rc genhtml_function_coverage=1 00:06:27.679 --rc genhtml_legend=1 00:06:27.679 --rc geninfo_all_blocks=1 00:06:27.679 --rc geninfo_unexecuted_blocks=1 00:06:27.679 00:06:27.679 ' 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.679 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.942 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:27.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:27.943 07:07:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:36.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:36.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:36.263 Found net devices under 0000:31:00.0: cvl_0_0 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.263 07:07:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:36.263 Found net devices under 0000:31:00.1: cvl_0_1 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.263 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:06:36.264 00:06:36.264 --- 10.0.0.2 ping statistics --- 00:06:36.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.264 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:36.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:06:36.264 00:06:36.264 --- 10.0.0.1 ping statistics --- 00:06:36.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.264 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1073657 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1073657 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:36.264 07:07:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1073657 ']' 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:36.264 07:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.264 [2024-11-20 07:07:10.992806] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:06:36.264 [2024-11-20 07:07:10.992889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.524 [2024-11-20 07:07:11.103207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.524 [2024-11-20 07:07:11.157151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.524 [2024-11-20 07:07:11.157210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.524 [2024-11-20 07:07:11.157219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.524 [2024-11-20 07:07:11.157226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.524 [2024-11-20 07:07:11.157233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
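Everything above runs on a single machine: nvmf_tcp_init splits the two E810 ports across network namespaces so the same box can act as both initiator and target. Condensed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk                        # target gets its own net namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # host -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability

nvmf_tgt is then launched inside the namespace with -m 0x1E, a mask that pins its reactors to cores 1 through 4; the four reactor_run notices below are the direct result of that mask.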
00:06:36.524 [2024-11-20 07:07:11.159619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.524 [2024-11-20 07:07:11.159783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.524 [2024-11-20 07:07:11.159934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:36.524 [2024-11-20 07:07:11.159936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.093 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.093 [2024-11-20 07:07:11.852652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 Malloc0 00:06:37.354 [2024-11-20 07:07:11.931221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1073905 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1073905 /var/tmp/bdevperf.sock 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1073905 ']' 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:37.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:37.354 { 00:06:37.354 "params": { 00:06:37.354 "name": "Nvme$subsystem", 00:06:37.354 "trtype": "$TEST_TRANSPORT", 00:06:37.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:37.354 "adrfam": "ipv4", 00:06:37.354 "trsvcid": "$NVMF_PORT", 00:06:37.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:37.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:37.354 "hdgst": ${hdgst:-false}, 00:06:37.354 "ddgst": ${ddgst:-false} 00:06:37.354 }, 00:06:37.354 "method": "bdev_nvme_attach_controller" 00:06:37.354 } 00:06:37.354 EOF 00:06:37.354 )") 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:37.354 07:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:37.354 "params": { 00:06:37.354 "name": "Nvme0", 00:06:37.354 "trtype": "tcp", 00:06:37.354 "traddr": "10.0.0.2", 00:06:37.354 "adrfam": "ipv4", 00:06:37.354 "trsvcid": "4420", 00:06:37.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:37.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:37.354 "hdgst": false, 00:06:37.354 "ddgst": false 00:06:37.354 }, 00:06:37.354 "method": "bdev_nvme_attach_controller" 00:06:37.354 }' 00:06:37.354 [2024-11-20 07:07:12.035629] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
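gen_nvmf_target_json above expands one heredoc per subsystem, substitutes the target address, and feeds the result to bdevperf's --json through process substitution (the /dev/fd/63 in the command line). The printf'd fragment is only the attach call; the harness wraps it in a bdev-subsystem envelope before handing it over, so under that assumption the document bdevperf actually parses is roughly:

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }

With -q 64 -o 65536 -w verify -t 10, bdevperf then drives 64 outstanding 64 KiB verify I/Os against the attached Nvme0n1 for ten seconds.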
00:06:37.354 [2024-11-20 07:07:12.035680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073905 ] 00:06:37.354 [2024-11-20 07:07:12.113506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.614 [2024-11-20 07:07:12.150067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.874 Running I/O for 10 seconds... 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=678 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 678 -ge 100 ']' 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:38.134 07:07:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:38.134 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.395 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.395 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.395 [2024-11-20 07:07:12.907931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.395 [2024-11-20 07:07:12.907976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.907987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.395 [2024-11-20 07:07:12.907996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.395 [2024-11-20 07:07:12.908012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:38.395 [2024-11-20 07:07:12.908027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8cb00 is same with the state(6) to be set 00:06:38.395 [2024-11-20 07:07:12.908094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.395 [2024-11-20 07:07:12.908200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.395 [2024-11-20 07:07:12.908209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [2024-11-20 07:07:12.908514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.396 [2024-11-20 07:07:12.908521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.396 [... the remaining nvme_io_qpair_print_command/spdk_nvme_print_completion pairs are elided here: every outstanding READ (cid 0-10, lba 98304-99584) and WRITE (cid 35-63, lba 102784-106368) on qid:1 completes with the same ABORTED - SQ DELETION (00/08) status while the controller reset tears down the submission queue ...] [2024-11-20 07:07:12.909186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.397 [2024-11-20 07:07:12.909193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.397 [2024-11-20 07:07:12.910427]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.397 task offset: 99712 on job bdev=Nvme0n1 fails 00:06:38.397 00:06:38.397 Latency(us) 00:06:38.397 [2024-11-20T06:07:13.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.397 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:38.397 Job: Nvme0n1 ended in about 0.45 seconds with error 00:06:38.397 Verification LBA range: start 0x0 length 0x400 00:06:38.397 Nvme0n1 : 0.45 1698.04 106.13 141.50 0.00 33771.70 1788.59 33423.36 00:06:38.397 [2024-11-20T06:07:13.164Z] =================================================================================================================== 00:06:38.397 [2024-11-20T06:07:13.164Z] Total : 1698.04 106.13 141.50 0.00 33771.70 1788.59 33423.36 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.397 [2024-11-20 07:07:12.912412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.397 [2024-11-20 07:07:12.912432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8cb00 (9): Bad file descriptor 00:06:38.397 [2024-11-20 07:07:12.914125] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:38.397 [2024-11-20 07:07:12.914236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:38.397 [2024-11-20 07:07:12.914266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.397 [2024-11-20 07:07:12.914284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:38.397 [2024-11-20 07:07:12.914293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:38.397 [2024-11-20 07:07:12.914300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:38.397 [2024-11-20 07:07:12.914307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe8cb00 00:06:38.397 [2024-11-20 07:07:12.914329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8cb00 (9): Bad file descriptor 00:06:38.397 [2024-11-20 07:07:12.914342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:38.397 [2024-11-20 07:07:12.914350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:38.397 [2024-11-20 07:07:12.914359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
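
The errors above show the host-ACL path this test exercises: a CONNECT from host0 is rejected with "does not allow host" (sct 1, sc 132) until nvmf_subsystem_add_host grants access, which is what host_management.sh@85 is issuing here. A minimal sketch of that round trip, reusing the rpc.py path and NQNs from this trace (the creation line is illustrative; the subsystem was actually created earlier in the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Created without -a (allow any host), the subsystem rejects unknown host NQNs;
# that surfaces on the initiator as the CONNECT failure logged above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0

# Whitelisting the host NQN lets the initiator's next reconnect complete.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
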
00:06:38.397 [2024-11-20 07:07:12.914367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.397 07:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1073905 00:06:39.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1073905) - No such process 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:39.337 { 00:06:39.337 "params": { 00:06:39.337 "name": "Nvme$subsystem", 00:06:39.337 "trtype": "$TEST_TRANSPORT", 00:06:39.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.337 "adrfam": "ipv4", 00:06:39.337 "trsvcid": "$NVMF_PORT", 00:06:39.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.337 "hdgst": ${hdgst:-false}, 00:06:39.337 "ddgst": ${ddgst:-false} 00:06:39.337 }, 00:06:39.337 "method": "bdev_nvme_attach_controller" 00:06:39.337 } 00:06:39.337 EOF 00:06:39.337 )") 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:39.337 07:07:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:39.337 "params": { 00:06:39.337 "name": "Nvme0", 00:06:39.337 "trtype": "tcp", 00:06:39.337 "traddr": "10.0.0.2", 00:06:39.337 "adrfam": "ipv4", 00:06:39.337 "trsvcid": "4420", 00:06:39.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.337 "hdgst": false, 00:06:39.337 "ddgst": false 00:06:39.337 }, 00:06:39.337 "method": "bdev_nvme_attach_controller" 00:06:39.337 }' 00:06:39.337 [2024-11-20 07:07:13.994117] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
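
The heredoc above emits one bdev_nvme_attach_controller entry per subsystem, and the filled-in params block is the printf output that follows it. A standalone sketch of an equivalent invocation; note the outer "subsystems"/"bdev" envelope is an assumption inferred from how gen_nvmf_target_json is consumed, since only the inner object is printed in this log:

gen_config() {
    # Envelope assumed; the "params" payload is copied from the trace above.
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Mirrors the --json /dev/fd/62 trick and flags used in this run:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 1
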
00:06:39.337 [2024-11-20 07:07:13.994172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074267 ] 00:06:39.337 [2024-11-20 07:07:14.071888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.597 [2024-11-20 07:07:14.107063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.597 Running I/O for 1 seconds... 00:06:40.536 1597.00 IOPS, 99.81 MiB/s 00:06:40.536 Latency(us) 00:06:40.536 [2024-11-20T06:07:15.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:40.536 Verification LBA range: start 0x0 length 0x400 00:06:40.536 Nvme0n1 : 1.03 1609.64 100.60 0.00 0.00 39084.36 6225.92 32986.45 00:06:40.536 [2024-11-20T06:07:15.303Z] =================================================================================================================== 00:06:40.536 [2024-11-20T06:07:15.303Z] Total : 1609.64 100.60 0.00 0.00 39084.36 6225.92 32986.45 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.796 rmmod nvme_tcp 00:06:40.796 rmmod nvme_fabrics 00:06:40.796 rmmod nvme_keyring 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.796 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1073657 ']' 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1073657 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1073657 ']' 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1073657 00:06:40.797 07:07:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1073657 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1073657' 00:06:40.797 killing process with pid 1073657 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1073657 00:06:40.797 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1073657 00:06:41.056 [2024-11-20 07:07:15.645899] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.056 07:07:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:43.598 00:06:43.598 real 0m15.505s 00:06:43.598 user 0m22.905s 00:06:43.598 sys 0m7.408s 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.598 ************************************ 00:06:43.598 END TEST nvmf_host_management 00:06:43.598 ************************************ 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
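
The nvmftestfini sequence just traced reduces to a handful of host-side steps. Condensed into a sketch; the netns removal is an assumption about what _remove_spdk_ns does, since its body runs with its xtrace redirected to fd 15:

sync
modprobe -v -r nvme-tcp              # nvme_fabrics and nvme_keyring unload with it, per the rmmod lines above
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt reactors
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the test's tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk      # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # clear the initiator-side address
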
00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.598 ************************************ 00:06:43.598 START TEST nvmf_lvol 00:06:43.598 ************************************ 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:43.598 * Looking for test storage... 00:06:43.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.598 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.598 --rc genhtml_branch_coverage=1 00:06:43.598 --rc genhtml_function_coverage=1 00:06:43.598 --rc genhtml_legend=1 00:06:43.598 --rc geninfo_all_blocks=1 00:06:43.598 --rc geninfo_unexecuted_blocks=1 00:06:43.598 00:06:43.598 ' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.598 --rc genhtml_branch_coverage=1 00:06:43.598 --rc genhtml_function_coverage=1 00:06:43.598 --rc genhtml_legend=1 00:06:43.598 --rc geninfo_all_blocks=1 00:06:43.598 --rc geninfo_unexecuted_blocks=1 00:06:43.598 00:06:43.598 ' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.598 --rc genhtml_branch_coverage=1 00:06:43.598 --rc genhtml_function_coverage=1 00:06:43.598 --rc genhtml_legend=1 00:06:43.598 --rc geninfo_all_blocks=1 00:06:43.598 --rc geninfo_unexecuted_blocks=1 00:06:43.598 00:06:43.598 ' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.598 --rc genhtml_branch_coverage=1 00:06:43.598 --rc genhtml_function_coverage=1 00:06:43.598 --rc genhtml_legend=1 00:06:43.598 --rc geninfo_all_blocks=1 00:06:43.598 --rc geninfo_unexecuted_blocks=1 00:06:43.598 00:06:43.598 ' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
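
The cmp_versions trace above is a field-wise version compare deciding whether the installed lcov (1.15 here) predates 2.x, which gates the branch/function-coverage flags exported in LCOV_OPTS. A behavior-equivalent sketch using GNU sort -V in place of the script's per-field loop (a deliberate simplification, not the actual scripts/common.sh code; the real export also carries the genhtml flags shown above):

lt() {
    # True when $1 sorts strictly before $2 as a version string.
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
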
00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.598 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.599 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.738 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:51.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:51.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.739 07:07:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:51.739 Found net devices under 0000:31:00.0: cvl_0_0 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:51.739 Found net devices under 0000:31:00.1: cvl_0_1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.739 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:06:51.740 00:06:51.740 --- 10.0.0.2 ping statistics --- 00:06:51.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.740 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:06:51.740 00:06:51.740 --- 10.0.0.1 ping statistics --- 00:06:51.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.740 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1079387 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1079387 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1079387 ']' 00:06:51.740 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.001 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.001 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.001 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.001 07:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.001 [2024-11-20 07:07:26.559092] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
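
The nvmftestinit plumbing traced at 07:07:26 gives the target and the initiator one port each of the dual-port e810 NIC, isolated by a network namespace so both ends of 10.0.0.0/24 can live on one machine. Collected into a runnable sketch; every command appears verbatim in the trace, only the ordering commentary is added:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the default netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # cross-namespace reachability, as above

# The target then runs inside the namespace so it owns 10.0.0.2:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
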
00:06:52.001 [2024-11-20 07:07:26.559156] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.001 [2024-11-20 07:07:26.651090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.001 [2024-11-20 07:07:26.692326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.001 [2024-11-20 07:07:26.692362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.001 [2024-11-20 07:07:26.692371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.001 [2024-11-20 07:07:26.692377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.001 [2024-11-20 07:07:26.692384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.001 [2024-11-20 07:07:26.693930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.001 [2024-11-20 07:07:26.694218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.001 [2024-11-20 07:07:26.694223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.942 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:52.943 [2024-11-20 07:07:27.553945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.943 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:53.204 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:53.204 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:53.466 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:53.466 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:53.466 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:53.728 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=28c191a5-8629-4b2b-8337-b98c94966238 00:06:53.728 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28c191a5-8629-4b2b-8337-b98c94966238 lvol 20 00:06:53.989 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=66ad3e2c-aa13-47e6-8a89-6e7b69ae61a5 00:06:53.989 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:53.989 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66ad3e2c-aa13-47e6-8a89-6e7b69ae61a5 00:06:54.250 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:54.512 [2024-11-20 07:07:29.069980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.512 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.772 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1080010 00:06:54.772 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:54.772 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:55.714 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 66ad3e2c-aa13-47e6-8a89-6e7b69ae61a5 MY_SNAPSHOT 00:06:55.975 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=46ff0ad5-32af-4ea3-8174-d702335566b0 00:06:55.975 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 66ad3e2c-aa13-47e6-8a89-6e7b69ae61a5 30 00:06:56.237 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 46ff0ad5-32af-4ea3-8174-d702335566b0 MY_CLONE 00:06:56.237 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5604b356-68d4-4d4d-ae11-b425b678ae60 00:06:56.237 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5604b356-68d4-4d4d-ae11-b425b678ae60 00:06:56.807 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1080010 00:07:04.943 Initializing NVMe Controllers 00:07:04.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:04.943 Controller IO queue size 128, less than required. 00:07:04.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:04.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:04.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:04.943 Initialization complete. Launching workers. 00:07:04.943 ======================================================== 00:07:04.943 Latency(us) 00:07:04.943 Device Information : IOPS MiB/s Average min max 00:07:04.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12250.90 47.86 10452.90 1521.05 47321.40 00:07:04.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17738.40 69.29 7216.55 351.80 62093.64 00:07:04.943 ======================================================== 00:07:04.943 Total : 29989.30 117.15 8538.63 351.80 62093.64 00:07:04.943 00:07:04.943 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.203 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66ad3e2c-aa13-47e6-8a89-6e7b69ae61a5 00:07:05.203 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28c191a5-8629-4b2b-8337-b98c94966238 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:05.465 rmmod nvme_tcp 00:07:05.465 rmmod nvme_fabrics 00:07:05.465 rmmod nvme_keyring 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1079387 ']' 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1079387 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1079387 ']' 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1079387 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.465 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1079387 00:07:05.725 07:07:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1079387' 00:07:05.725 killing process with pid 1079387 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1079387 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1079387 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:05.725 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:05.726 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.726 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.726 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:08.270 00:07:08.270 real 0m24.678s 00:07:08.270 user 1m4.110s 00:07:08.270 sys 0m9.334s 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.270 ************************************ 00:07:08.270 END TEST nvmf_lvol 00:07:08.270 ************************************ 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.270 ************************************ 00:07:08.270 START TEST nvmf_lvs_grow 00:07:08.270 ************************************ 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:08.270 * Looking for test storage... 
00:07:08.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.270 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.271 --rc genhtml_branch_coverage=1 00:07:08.271 --rc genhtml_function_coverage=1 00:07:08.271 --rc genhtml_legend=1 00:07:08.271 --rc geninfo_all_blocks=1 00:07:08.271 --rc geninfo_unexecuted_blocks=1 00:07:08.271 00:07:08.271 ' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.271 --rc genhtml_branch_coverage=1 00:07:08.271 --rc genhtml_function_coverage=1 00:07:08.271 --rc genhtml_legend=1 00:07:08.271 --rc geninfo_all_blocks=1 00:07:08.271 --rc geninfo_unexecuted_blocks=1 00:07:08.271 00:07:08.271 ' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.271 --rc genhtml_branch_coverage=1 00:07:08.271 --rc genhtml_function_coverage=1 00:07:08.271 --rc genhtml_legend=1 00:07:08.271 --rc geninfo_all_blocks=1 00:07:08.271 --rc geninfo_unexecuted_blocks=1 00:07:08.271 00:07:08.271 ' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.271 --rc genhtml_branch_coverage=1 00:07:08.271 --rc genhtml_function_coverage=1 00:07:08.271 --rc genhtml_legend=1 00:07:08.271 --rc geninfo_all_blocks=1 00:07:08.271 --rc geninfo_unexecuted_blocks=1 00:07:08.271 00:07:08.271 ' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:08.271 07:07:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.271 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:08.272 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:16.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:16.411 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.411 07:07:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.411 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:16.412 Found net devices under 0000:31:00.0: cvl_0_0 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:16.412 Found net devices under 0000:31:00.1: cvl_0_1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:16.412 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:16.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:07:16.412 00:07:16.412 --- 10.0.0.2 ping statistics --- 00:07:16.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.412 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:16.412 00:07:16.412 --- 10.0.0.1 ping statistics --- 00:07:16.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.412 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1087057 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1087057 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1087057 ']' 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.412 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.412 [2024-11-20 07:07:51.173389] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:07:16.412 [2024-11-20 07:07:51.173441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.673 [2024-11-20 07:07:51.257193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.673 [2024-11-20 07:07:51.291660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.673 [2024-11-20 07:07:51.291693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.673 [2024-11-20 07:07:51.291701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.673 [2024-11-20 07:07:51.291708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.673 [2024-11-20 07:07:51.291713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.673 [2024-11-20 07:07:51.292275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.673 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.934 [2024-11-20 07:07:51.576400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.934 ************************************ 00:07:16.934 START TEST lvs_grow_clean 00:07:16.934 ************************************ 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:16.934 07:07:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:16.934 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.195 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:17.195 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:17.455 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:17.455 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:17.455 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:17.455 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:17.455 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:17.455 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c lvol 150 00:07:17.714 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a3ca5352-185f-4e17-a09d-556291fb888a 00:07:17.714 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.714 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:17.714 [2024-11-20 07:07:52.475555] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:17.714 [2024-11-20 07:07:52.475605] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:17.714 true 00:07:17.975 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:17.975 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:17.975 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:17.975 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.236 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3ca5352-185f-4e17-a09d-556291fb888a 00:07:18.236 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.495 [2024-11-20 07:07:53.153629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.495 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1087444 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1087444 /var/tmp/bdevperf.sock 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1087444 ']' 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:18.754 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.755 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:18.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:18.755 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.755 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:18.755 [2024-11-20 07:07:53.383741] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:07:18.755 [2024-11-20 07:07:53.383795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087444 ] 00:07:18.755 [2024-11-20 07:07:53.478937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.755 [2024-11-20 07:07:53.514788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.692 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.692 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:19.692 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:19.953 Nvme0n1 00:07:19.953 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:19.953 [ 00:07:19.953 { 00:07:19.953 "name": "Nvme0n1", 00:07:19.953 "aliases": [ 00:07:19.953 "a3ca5352-185f-4e17-a09d-556291fb888a" 00:07:19.953 ], 00:07:19.953 "product_name": "NVMe disk", 00:07:19.953 "block_size": 4096, 00:07:19.953 "num_blocks": 38912, 00:07:19.953 "uuid": "a3ca5352-185f-4e17-a09d-556291fb888a", 00:07:19.953 "numa_id": 0, 00:07:19.953 "assigned_rate_limits": { 00:07:19.953 "rw_ios_per_sec": 0, 00:07:19.953 "rw_mbytes_per_sec": 0, 00:07:19.953 "r_mbytes_per_sec": 0, 00:07:19.953 "w_mbytes_per_sec": 0 00:07:19.953 }, 00:07:19.953 "claimed": false, 00:07:19.953 "zoned": false, 00:07:19.953 "supported_io_types": { 00:07:19.953 "read": true, 00:07:19.953 "write": true, 00:07:19.953 "unmap": true, 00:07:19.953 "flush": true, 00:07:19.953 "reset": true, 00:07:19.953 "nvme_admin": true, 00:07:19.953 "nvme_io": true, 00:07:19.953 "nvme_io_md": false, 00:07:19.953 "write_zeroes": true, 00:07:19.953 "zcopy": false, 00:07:19.953 "get_zone_info": false, 00:07:19.953 "zone_management": false, 00:07:19.953 "zone_append": false, 00:07:19.953 "compare": true, 00:07:19.953 "compare_and_write": true, 00:07:19.953 "abort": true, 00:07:19.953 "seek_hole": false, 00:07:19.953 "seek_data": false, 00:07:19.953 "copy": true, 00:07:19.953 "nvme_iov_md": false 00:07:19.953 }, 00:07:19.953 "memory_domains": [ 00:07:19.953 { 00:07:19.953 "dma_device_id": "system", 00:07:19.953 "dma_device_type": 1 00:07:19.953 } 00:07:19.953 ], 00:07:19.953 "driver_specific": { 00:07:19.953 "nvme": [ 00:07:19.953 { 00:07:19.953 "trid": { 00:07:19.953 "trtype": "TCP", 00:07:19.953 "adrfam": "IPv4", 00:07:19.953 "traddr": "10.0.0.2", 00:07:19.953 "trsvcid": "4420", 00:07:19.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:19.953 }, 00:07:19.953 "ctrlr_data": { 00:07:19.953 "cntlid": 1, 00:07:19.953 "vendor_id": "0x8086", 00:07:19.953 "model_number": "SPDK bdev Controller", 00:07:19.953 "serial_number": "SPDK0", 00:07:19.953 "firmware_revision": "25.01", 00:07:19.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.953 "oacs": { 00:07:19.953 "security": 0, 00:07:19.953 "format": 0, 00:07:19.953 "firmware": 0, 00:07:19.953 "ns_manage": 0 00:07:19.953 }, 00:07:19.953 "multi_ctrlr": true, 00:07:19.953 
"ana_reporting": false 00:07:19.953 }, 00:07:19.953 "vs": { 00:07:19.953 "nvme_version": "1.3" 00:07:19.953 }, 00:07:19.953 "ns_data": { 00:07:19.953 "id": 1, 00:07:19.953 "can_share": true 00:07:19.953 } 00:07:19.953 } 00:07:19.953 ], 00:07:19.953 "mp_policy": "active_passive" 00:07:19.953 } 00:07:19.953 } 00:07:19.953 ] 00:07:19.953 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1087784 00:07:19.953 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:19.953 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:20.214 Running I/O for 10 seconds... 00:07:21.156 Latency(us) 00:07:21.156 [2024-11-20T06:07:55.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.156 Nvme0n1 : 1.00 17717.00 69.21 0.00 0.00 0.00 0.00 0.00 00:07:21.156 [2024-11-20T06:07:55.923Z] =================================================================================================================== 00:07:21.156 [2024-11-20T06:07:55.923Z] Total : 17717.00 69.21 0.00 0.00 0.00 0.00 0.00 00:07:21.156 00:07:22.097 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:22.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.097 Nvme0n1 : 2.00 17840.50 69.69 0.00 0.00 0.00 0.00 0.00 00:07:22.097 [2024-11-20T06:07:56.864Z] =================================================================================================================== 00:07:22.097 [2024-11-20T06:07:56.864Z] Total : 17840.50 69.69 0.00 0.00 0.00 0.00 0.00 00:07:22.097 00:07:22.097 true 00:07:22.358 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:22.358 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:22.358 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:22.358 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:22.358 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1087784 00:07:23.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.300 Nvme0n1 : 3.00 17897.67 69.91 0.00 0.00 0.00 0.00 0.00 00:07:23.300 [2024-11-20T06:07:58.067Z] =================================================================================================================== 00:07:23.300 [2024-11-20T06:07:58.067Z] Total : 17897.67 69.91 0.00 0.00 0.00 0.00 0.00 00:07:23.300 00:07:24.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.241 Nvme0n1 : 4.00 17915.25 69.98 0.00 0.00 0.00 0.00 0.00 00:07:24.241 [2024-11-20T06:07:59.008Z] 
=================================================================================================================== 00:07:24.241 [2024-11-20T06:07:59.008Z] Total : 17915.25 69.98 0.00 0.00 0.00 0.00 0.00 00:07:24.241 00:07:25.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.181 Nvme0n1 : 5.00 17953.40 70.13 0.00 0.00 0.00 0.00 0.00 00:07:25.181 [2024-11-20T06:07:59.948Z] =================================================================================================================== 00:07:25.181 [2024-11-20T06:07:59.948Z] Total : 17953.40 70.13 0.00 0.00 0.00 0.00 0.00 00:07:25.181 00:07:26.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.142 Nvme0n1 : 6.00 17974.50 70.21 0.00 0.00 0.00 0.00 0.00 00:07:26.142 [2024-11-20T06:08:00.909Z] =================================================================================================================== 00:07:26.142 [2024-11-20T06:08:00.909Z] Total : 17974.50 70.21 0.00 0.00 0.00 0.00 0.00 00:07:26.142 00:07:27.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.082 Nvme0n1 : 7.00 17992.29 70.28 0.00 0.00 0.00 0.00 0.00 00:07:27.082 [2024-11-20T06:08:01.849Z] =================================================================================================================== 00:07:27.082 [2024-11-20T06:08:01.849Z] Total : 17992.29 70.28 0.00 0.00 0.00 0.00 0.00 00:07:27.082 00:07:28.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.024 Nvme0n1 : 8.00 18006.25 70.34 0.00 0.00 0.00 0.00 0.00 00:07:28.024 [2024-11-20T06:08:02.791Z] =================================================================================================================== 00:07:28.024 [2024-11-20T06:08:02.791Z] Total : 18006.25 70.34 0.00 0.00 0.00 0.00 0.00 00:07:28.024 00:07:29.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.409 Nvme0n1 : 9.00 18020.89 70.39 0.00 0.00 0.00 0.00 0.00 00:07:29.409 [2024-11-20T06:08:04.176Z] =================================================================================================================== 00:07:29.409 [2024-11-20T06:08:04.176Z] Total : 18020.89 70.39 0.00 0.00 0.00 0.00 0.00 00:07:29.409 00:07:30.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.350 Nvme0n1 : 10.00 18026.00 70.41 0.00 0.00 0.00 0.00 0.00 00:07:30.350 [2024-11-20T06:08:05.117Z] =================================================================================================================== 00:07:30.350 [2024-11-20T06:08:05.117Z] Total : 18026.00 70.41 0.00 0.00 0.00 0.00 0.00 00:07:30.350 00:07:30.350 00:07:30.350 Latency(us) 00:07:30.350 [2024-11-20T06:08:05.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.350 Nvme0n1 : 10.00 18029.79 70.43 0.00 0.00 7096.91 4205.23 13052.59 00:07:30.350 [2024-11-20T06:08:05.117Z] =================================================================================================================== 00:07:30.350 [2024-11-20T06:08:05.117Z] Total : 18029.79 70.43 0.00 0.00 7096.91 4205.23 13052.59 00:07:30.350 { 00:07:30.350 "results": [ 00:07:30.350 { 00:07:30.350 "job": "Nvme0n1", 00:07:30.350 "core_mask": "0x2", 00:07:30.350 "workload": "randwrite", 00:07:30.350 "status": "finished", 00:07:30.350 "queue_depth": 128, 00:07:30.350 "io_size": 4096, 00:07:30.350 
"runtime": 10.004998, 00:07:30.350 "iops": 18029.78871160194, 00:07:30.350 "mibps": 70.42886215469508, 00:07:30.350 "io_failed": 0, 00:07:30.350 "io_timeout": 0, 00:07:30.350 "avg_latency_us": 7096.907495694466, 00:07:30.350 "min_latency_us": 4205.2266666666665, 00:07:30.350 "max_latency_us": 13052.586666666666 00:07:30.350 } 00:07:30.350 ], 00:07:30.350 "core_count": 1 00:07:30.350 } 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1087444 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1087444 ']' 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1087444 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1087444 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1087444' 00:07:30.350 killing process with pid 1087444 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1087444 00:07:30.350 Received shutdown signal, test time was about 10.000000 seconds 00:07:30.350 00:07:30.350 Latency(us) 00:07:30.350 [2024-11-20T06:08:05.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.350 [2024-11-20T06:08:05.117Z] =================================================================================================================== 00:07:30.350 [2024-11-20T06:08:05.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1087444 00:07:30.350 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.610 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.610 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:30.610 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:30.870 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:30.870 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:30.870 07:08:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.176 [2024-11-20 07:08:05.678432] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:31.176 request: 00:07:31.176 { 00:07:31.176 "uuid": "4064d67c-dfcb-481b-bd1f-e340dba88e6c", 00:07:31.176 "method": "bdev_lvol_get_lvstores", 00:07:31.176 "req_id": 1 00:07:31.176 } 00:07:31.176 Got JSON-RPC error response 00:07:31.176 response: 00:07:31.176 { 00:07:31.176 "code": -19, 00:07:31.176 "message": "No such device" 00:07:31.176 } 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.176 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.479 aio_bdev 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a3ca5352-185f-4e17-a09d-556291fb888a 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=a3ca5352-185f-4e17-a09d-556291fb888a 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:31.479 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3ca5352-185f-4e17-a09d-556291fb888a -t 2000 00:07:31.751 [ 00:07:31.751 { 00:07:31.751 "name": "a3ca5352-185f-4e17-a09d-556291fb888a", 00:07:31.751 "aliases": [ 00:07:31.751 "lvs/lvol" 00:07:31.751 ], 00:07:31.751 "product_name": "Logical Volume", 00:07:31.751 "block_size": 4096, 00:07:31.751 "num_blocks": 38912, 00:07:31.751 "uuid": "a3ca5352-185f-4e17-a09d-556291fb888a", 00:07:31.751 "assigned_rate_limits": { 00:07:31.751 "rw_ios_per_sec": 0, 00:07:31.751 "rw_mbytes_per_sec": 0, 00:07:31.751 "r_mbytes_per_sec": 0, 00:07:31.751 "w_mbytes_per_sec": 0 00:07:31.751 }, 00:07:31.751 "claimed": false, 00:07:31.751 "zoned": false, 00:07:31.751 "supported_io_types": { 00:07:31.751 "read": true, 00:07:31.751 "write": true, 00:07:31.751 "unmap": true, 00:07:31.751 "flush": false, 00:07:31.751 "reset": true, 00:07:31.751 "nvme_admin": false, 00:07:31.751 "nvme_io": false, 00:07:31.751 "nvme_io_md": false, 00:07:31.752 "write_zeroes": true, 00:07:31.752 "zcopy": false, 00:07:31.752 "get_zone_info": false, 00:07:31.752 "zone_management": false, 00:07:31.752 "zone_append": false, 00:07:31.752 "compare": false, 00:07:31.752 "compare_and_write": false, 00:07:31.752 "abort": false, 00:07:31.752 "seek_hole": true, 00:07:31.752 "seek_data": true, 00:07:31.752 "copy": false, 00:07:31.752 "nvme_iov_md": false 00:07:31.752 }, 00:07:31.752 "driver_specific": { 00:07:31.752 "lvol": { 00:07:31.752 "lvol_store_uuid": "4064d67c-dfcb-481b-bd1f-e340dba88e6c", 00:07:31.752 "base_bdev": "aio_bdev", 00:07:31.752 "thin_provision": false, 00:07:31.752 "num_allocated_clusters": 38, 00:07:31.752 "snapshot": false, 00:07:31.752 "clone": false, 00:07:31.752 "esnap_clone": false 00:07:31.752 } 00:07:31.752 } 00:07:31.752 } 00:07:31.752 ] 00:07:31.752 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:31.752 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:31.752 
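
The sequence above — the @84 bdev_aio_delete through the @87/@88 re-registration check — is the hot-remove/recovery step of the clean test: deleting the base AIO bdev closes the lvstore on top of it, the subsequent bdev_lvol_get_lvstores fails with -19 (No such device), and re-creating the AIO bdev over the same file lets lvol examine re-register the volume. Reduced to a sketch, with the target on the default /var/tmp/spdk.sock and $LVS_UUID, $LVOL_UUID, and $AIO_FILE as placeholders:

  rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }      # assumed wrapper around the checkout's rpc.py
  rpc bdev_aio_delete aio_bdev                    # hot-remove: the lvstore above it closes
  rpc bdev_lvol_get_lvstores -u "$LVS_UUID" \
      && echo "unexpected: lvstore still present" \
      || echo "expected: -19 No such device"
  rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096   # re-attach the same backing file
  rpc bdev_wait_for_examine                       # wait for lvol examine to finish
  rpc bdev_get_bdevs -b "$LVOL_UUID" -t 2000      # lvol bdev is registered again (2 s timeout)
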
07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:32.020 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:32.020 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:32.020 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:32.020 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:32.020 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3ca5352-185f-4e17-a09d-556291fb888a 00:07:32.281 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4064d67c-dfcb-481b-bd1f-e340dba88e6c 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.541 00:07:32.541 real 0m15.636s 00:07:32.541 user 0m15.378s 00:07:32.541 sys 0m1.327s 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:32.541 ************************************ 00:07:32.541 END TEST lvs_grow_clean 00:07:32.541 ************************************ 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:32.541 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.812 ************************************ 00:07:32.812 START TEST lvs_grow_dirty 00:07:32.812 ************************************ 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.812 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.077 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:33.077 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.077 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:33.077 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:33.077 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:33.337 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:33.337 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:33.337 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bce1a276-6aad-4715-8e21-1418d8b3a29e lvol 150 00:07:33.337 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:33.337 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.337 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:33.599 [2024-11-20 07:08:08.252101] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:33.599 [2024-11-20 07:08:08.252153] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:33.599 true 00:07:33.599 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:33.599 07:08:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:33.860 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:33.860 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.860 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:34.121 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.382 [2024-11-20 07:08:08.926153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.382 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1090559 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1090559 /var/tmp/bdevperf.sock 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1090559 ']' 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:34.382 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:34.382 [2024-11-20 07:08:09.142250] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
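
At this point the dirty variant has its data path fully assembled: a 200 MiB AIO file carrying an lvstore (49 data clusters at --cluster-sz 4194304) and a 150 MiB lvol, already grown to 400 MiB on disk and rescanned, exported as nqn.2016-06.io.spdk:cnode0 over TCP, with bdevperf starting up against it. Condensed into a sketch — the backing-file path is a placeholder, sizes and flags mirror the log, and bdev_lvol_grow_lvstore is the step issued at @60 a little further below:

  truncate -s 200M "$AIO_FILE"
  rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096
  lvs=$(rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc bdev_lvol_create -u "$lvs" lvol 150        # 150 MiB -> 38 of the 49 clusters
  truncate -s 400M "$AIO_FILE"                   # grow the file underneath the bdev
  rpc bdev_aio_rescan aio_bdev                   # block count 51200 -> 102400
  rpc bdev_lvol_grow_lvstore -u "$lvs"           # lvstore claims the new clusters
  rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99
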
00:07:34.382 [2024-11-20 07:08:09.142303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090559 ] 00:07:34.647 [2024-11-20 07:08:09.234353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.647 [2024-11-20 07:08:09.264874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.220 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.220 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:35.220 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.791 Nvme0n1 00:07:35.791 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:35.791 [ 00:07:35.791 { 00:07:35.791 "name": "Nvme0n1", 00:07:35.791 "aliases": [ 00:07:35.791 "199c9fc7-47cc-4da3-8245-9298e45226f4" 00:07:35.791 ], 00:07:35.791 "product_name": "NVMe disk", 00:07:35.791 "block_size": 4096, 00:07:35.791 "num_blocks": 38912, 00:07:35.791 "uuid": "199c9fc7-47cc-4da3-8245-9298e45226f4", 00:07:35.791 "numa_id": 0, 00:07:35.791 "assigned_rate_limits": { 00:07:35.791 "rw_ios_per_sec": 0, 00:07:35.791 "rw_mbytes_per_sec": 0, 00:07:35.791 "r_mbytes_per_sec": 0, 00:07:35.791 "w_mbytes_per_sec": 0 00:07:35.791 }, 00:07:35.791 "claimed": false, 00:07:35.791 "zoned": false, 00:07:35.791 "supported_io_types": { 00:07:35.791 "read": true, 00:07:35.791 "write": true, 00:07:35.791 "unmap": true, 00:07:35.791 "flush": true, 00:07:35.791 "reset": true, 00:07:35.791 "nvme_admin": true, 00:07:35.791 "nvme_io": true, 00:07:35.791 "nvme_io_md": false, 00:07:35.791 "write_zeroes": true, 00:07:35.791 "zcopy": false, 00:07:35.791 "get_zone_info": false, 00:07:35.791 "zone_management": false, 00:07:35.791 "zone_append": false, 00:07:35.791 "compare": true, 00:07:35.791 "compare_and_write": true, 00:07:35.791 "abort": true, 00:07:35.791 "seek_hole": false, 00:07:35.791 "seek_data": false, 00:07:35.791 "copy": true, 00:07:35.791 "nvme_iov_md": false 00:07:35.791 }, 00:07:35.791 "memory_domains": [ 00:07:35.791 { 00:07:35.791 "dma_device_id": "system", 00:07:35.791 "dma_device_type": 1 00:07:35.791 } 00:07:35.791 ], 00:07:35.791 "driver_specific": { 00:07:35.791 "nvme": [ 00:07:35.791 { 00:07:35.791 "trid": { 00:07:35.791 "trtype": "TCP", 00:07:35.791 "adrfam": "IPv4", 00:07:35.791 "traddr": "10.0.0.2", 00:07:35.791 "trsvcid": "4420", 00:07:35.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:35.791 }, 00:07:35.791 "ctrlr_data": { 00:07:35.791 "cntlid": 1, 00:07:35.791 "vendor_id": "0x8086", 00:07:35.791 "model_number": "SPDK bdev Controller", 00:07:35.791 "serial_number": "SPDK0", 00:07:35.791 "firmware_revision": "25.01", 00:07:35.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.791 "oacs": { 00:07:35.791 "security": 0, 00:07:35.791 "format": 0, 00:07:35.791 "firmware": 0, 00:07:35.791 "ns_manage": 0 00:07:35.791 }, 00:07:35.791 "multi_ctrlr": true, 00:07:35.791 
"ana_reporting": false 00:07:35.791 }, 00:07:35.791 "vs": { 00:07:35.791 "nvme_version": "1.3" 00:07:35.791 }, 00:07:35.791 "ns_data": { 00:07:35.791 "id": 1, 00:07:35.791 "can_share": true 00:07:35.791 } 00:07:35.791 } 00:07:35.791 ], 00:07:35.791 "mp_policy": "active_passive" 00:07:35.791 } 00:07:35.791 } 00:07:35.791 ] 00:07:35.791 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1090887 00:07:35.791 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:35.791 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.791 Running I/O for 10 seconds... 00:07:37.177 Latency(us) 00:07:37.177 [2024-11-20T06:08:11.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.177 Nvme0n1 : 1.00 17726.00 69.24 0.00 0.00 0.00 0.00 0.00 00:07:37.177 [2024-11-20T06:08:11.944Z] =================================================================================================================== 00:07:37.177 [2024-11-20T06:08:11.944Z] Total : 17726.00 69.24 0.00 0.00 0.00 0.00 0.00 00:07:37.177 00:07:37.747 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:38.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.008 Nvme0n1 : 2.00 17881.00 69.85 0.00 0.00 0.00 0.00 0.00 00:07:38.008 [2024-11-20T06:08:12.775Z] =================================================================================================================== 00:07:38.008 [2024-11-20T06:08:12.775Z] Total : 17881.00 69.85 0.00 0.00 0.00 0.00 0.00 00:07:38.008 00:07:38.008 true 00:07:38.008 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:38.008 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:38.268 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:38.268 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:38.268 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1090887 00:07:38.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.839 Nvme0n1 : 3.00 17930.33 70.04 0.00 0.00 0.00 0.00 0.00 00:07:38.839 [2024-11-20T06:08:13.606Z] =================================================================================================================== 00:07:38.839 [2024-11-20T06:08:13.606Z] Total : 17930.33 70.04 0.00 0.00 0.00 0.00 0.00 00:07:38.839 00:07:40.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.222 Nvme0n1 : 4.00 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:07:40.222 [2024-11-20T06:08:14.990Z] 
=================================================================================================================== 00:07:40.223 [2024-11-20T06:08:14.990Z] Total : 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:07:40.223 00:07:41.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.163 Nvme0n1 : 5.00 17985.00 70.25 0.00 0.00 0.00 0.00 0.00 00:07:41.163 [2024-11-20T06:08:15.930Z] =================================================================================================================== 00:07:41.163 [2024-11-20T06:08:15.930Z] Total : 17985.00 70.25 0.00 0.00 0.00 0.00 0.00 00:07:41.163 00:07:42.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.103 Nvme0n1 : 6.00 18003.17 70.32 0.00 0.00 0.00 0.00 0.00 00:07:42.103 [2024-11-20T06:08:16.870Z] =================================================================================================================== 00:07:42.103 [2024-11-20T06:08:16.870Z] Total : 18003.17 70.32 0.00 0.00 0.00 0.00 0.00 00:07:42.103 00:07:43.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.044 Nvme0n1 : 7.00 18025.29 70.41 0.00 0.00 0.00 0.00 0.00 00:07:43.044 [2024-11-20T06:08:17.811Z] =================================================================================================================== 00:07:43.044 [2024-11-20T06:08:17.811Z] Total : 18025.29 70.41 0.00 0.00 0.00 0.00 0.00 00:07:43.044 00:07:43.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.985 Nvme0n1 : 8.00 18034.38 70.45 0.00 0.00 0.00 0.00 0.00 00:07:43.985 [2024-11-20T06:08:18.752Z] =================================================================================================================== 00:07:43.985 [2024-11-20T06:08:18.752Z] Total : 18034.38 70.45 0.00 0.00 0.00 0.00 0.00 00:07:43.985 00:07:44.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.926 Nvme0n1 : 9.00 18048.00 70.50 0.00 0.00 0.00 0.00 0.00 00:07:44.926 [2024-11-20T06:08:19.693Z] =================================================================================================================== 00:07:44.926 [2024-11-20T06:08:19.693Z] Total : 18048.00 70.50 0.00 0.00 0.00 0.00 0.00 00:07:44.926 00:07:45.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.869 Nvme0n1 : 10.00 18052.20 70.52 0.00 0.00 0.00 0.00 0.00 00:07:45.869 [2024-11-20T06:08:20.636Z] =================================================================================================================== 00:07:45.869 [2024-11-20T06:08:20.636Z] Total : 18052.20 70.52 0.00 0.00 0.00 0.00 0.00 00:07:45.869 00:07:45.869 00:07:45.869 Latency(us) 00:07:45.869 [2024-11-20T06:08:20.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.869 Nvme0n1 : 10.00 18056.87 70.53 0.00 0.00 7085.71 2744.32 13707.95 00:07:45.869 [2024-11-20T06:08:20.636Z] =================================================================================================================== 00:07:45.869 [2024-11-20T06:08:20.636Z] Total : 18056.87 70.53 0.00 0.00 7085.71 2744.32 13707.95 00:07:45.869 { 00:07:45.869 "results": [ 00:07:45.869 { 00:07:45.869 "job": "Nvme0n1", 00:07:45.869 "core_mask": "0x2", 00:07:45.869 "workload": "randwrite", 00:07:45.869 "status": "finished", 00:07:45.869 "queue_depth": 128, 00:07:45.869 "io_size": 4096, 00:07:45.869 
"runtime": 10.0045, 00:07:45.869 "iops": 18056.87440651707, 00:07:45.869 "mibps": 70.5346656504573, 00:07:45.869 "io_failed": 0, 00:07:45.870 "io_timeout": 0, 00:07:45.870 "avg_latency_us": 7085.711325989481, 00:07:45.870 "min_latency_us": 2744.32, 00:07:45.870 "max_latency_us": 13707.946666666667 00:07:45.870 } 00:07:45.870 ], 00:07:45.870 "core_count": 1 00:07:45.870 } 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1090559 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1090559 ']' 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1090559 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.870 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1090559 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1090559' 00:07:46.130 killing process with pid 1090559 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1090559 00:07:46.130 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.130 00:07:46.130 Latency(us) 00:07:46.130 [2024-11-20T06:08:20.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.130 [2024-11-20T06:08:20.897Z] =================================================================================================================== 00:07:46.130 [2024-11-20T06:08:20.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1090559 00:07:46.130 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.391 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:46.652 07:08:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1087057 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1087057 00:07:46.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1087057 Killed "${NVMF_APP[@]}" "$@" 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.652 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1093146 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1093146 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1093146 ']' 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.912 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:46.912 [2024-11-20 07:08:21.473838] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:07:46.912 [2024-11-20 07:08:21.473901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.912 [2024-11-20 07:08:21.560285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.912 [2024-11-20 07:08:21.597264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.912 [2024-11-20 07:08:21.597301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.912 [2024-11-20 07:08:21.597309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.912 [2024-11-20 07:08:21.597316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
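
The kill -9 at @74 is what makes this variant dirty: the previous target is terminated without closing the lvstore, so the blobstore superblock stays marked in-use, and the freshly started nvmf_tgt runs blobstore recovery as soon as the AIO bdev is re-created below ("Performing recovery on blobstore", "Recover: blob 0x0/0x1"). The restart pattern, sketched with the netns and flags taken from the log and the harness's waitforlisten simplified to a polling loop:

  kill -9 "$nvmfpid"                               # no clean shutdown: blobstore left dirty
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # simplified waitforlisten
  rpc bdev_aio_create "$AIO_FILE" aio_bdev 4096    # loading the lvstore triggers recovery
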
00:07:46.912 [2024-11-20 07:08:21.597322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.912 [2024-11-20 07:08:21.597923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.853 [2024-11-20 07:08:22.453844] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:47.853 [2024-11-20 07:08:22.453938] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:47.853 [2024-11-20 07:08:22.453970] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:47.853 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:47.854 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:47.854 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:47.854 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:47.854 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:47.854 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.114 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 199c9fc7-47cc-4da3-8245-9298e45226f4 -t 2000 00:07:48.114 [ 00:07:48.114 { 00:07:48.114 "name": "199c9fc7-47cc-4da3-8245-9298e45226f4", 00:07:48.114 "aliases": [ 00:07:48.114 "lvs/lvol" 00:07:48.114 ], 00:07:48.114 "product_name": "Logical Volume", 00:07:48.114 "block_size": 4096, 00:07:48.114 "num_blocks": 38912, 00:07:48.114 "uuid": "199c9fc7-47cc-4da3-8245-9298e45226f4", 00:07:48.114 "assigned_rate_limits": { 00:07:48.114 "rw_ios_per_sec": 0, 00:07:48.114 "rw_mbytes_per_sec": 0, 
00:07:48.114 "r_mbytes_per_sec": 0, 00:07:48.114 "w_mbytes_per_sec": 0 00:07:48.114 }, 00:07:48.114 "claimed": false, 00:07:48.114 "zoned": false, 00:07:48.114 "supported_io_types": { 00:07:48.114 "read": true, 00:07:48.114 "write": true, 00:07:48.114 "unmap": true, 00:07:48.114 "flush": false, 00:07:48.114 "reset": true, 00:07:48.114 "nvme_admin": false, 00:07:48.114 "nvme_io": false, 00:07:48.114 "nvme_io_md": false, 00:07:48.114 "write_zeroes": true, 00:07:48.114 "zcopy": false, 00:07:48.114 "get_zone_info": false, 00:07:48.114 "zone_management": false, 00:07:48.114 "zone_append": false, 00:07:48.114 "compare": false, 00:07:48.114 "compare_and_write": false, 00:07:48.114 "abort": false, 00:07:48.114 "seek_hole": true, 00:07:48.114 "seek_data": true, 00:07:48.114 "copy": false, 00:07:48.114 "nvme_iov_md": false 00:07:48.114 }, 00:07:48.114 "driver_specific": { 00:07:48.114 "lvol": { 00:07:48.114 "lvol_store_uuid": "bce1a276-6aad-4715-8e21-1418d8b3a29e", 00:07:48.114 "base_bdev": "aio_bdev", 00:07:48.114 "thin_provision": false, 00:07:48.114 "num_allocated_clusters": 38, 00:07:48.114 "snapshot": false, 00:07:48.114 "clone": false, 00:07:48.114 "esnap_clone": false 00:07:48.114 } 00:07:48.114 } 00:07:48.114 } 00:07:48.114 ] 00:07:48.114 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:48.114 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:48.114 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:48.375 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:48.375 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:48.375 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.636 [2024-11-20 07:08:23.326045] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:48.636 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:48.896 request: 00:07:48.896 { 00:07:48.896 "uuid": "bce1a276-6aad-4715-8e21-1418d8b3a29e", 00:07:48.896 "method": "bdev_lvol_get_lvstores", 00:07:48.896 "req_id": 1 00:07:48.896 } 00:07:48.896 Got JSON-RPC error response 00:07:48.896 response: 00:07:48.896 { 00:07:48.896 "code": -19, 00:07:48.896 "message": "No such device" 00:07:48.896 } 00:07:48.896 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:48.896 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.896 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.896 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.896 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.158 aio_bdev 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:49.158 07:08:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.158 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 199c9fc7-47cc-4da3-8245-9298e45226f4 -t 2000 00:07:49.419 [ 00:07:49.419 { 00:07:49.419 "name": "199c9fc7-47cc-4da3-8245-9298e45226f4", 00:07:49.419 "aliases": [ 00:07:49.419 "lvs/lvol" 00:07:49.419 ], 00:07:49.419 "product_name": "Logical Volume", 00:07:49.419 "block_size": 4096, 00:07:49.419 "num_blocks": 38912, 00:07:49.419 "uuid": "199c9fc7-47cc-4da3-8245-9298e45226f4", 00:07:49.419 "assigned_rate_limits": { 00:07:49.419 "rw_ios_per_sec": 0, 00:07:49.419 "rw_mbytes_per_sec": 0, 00:07:49.419 "r_mbytes_per_sec": 0, 00:07:49.419 "w_mbytes_per_sec": 0 00:07:49.419 }, 00:07:49.419 "claimed": false, 00:07:49.419 "zoned": false, 00:07:49.419 "supported_io_types": { 00:07:49.419 "read": true, 00:07:49.419 "write": true, 00:07:49.419 "unmap": true, 00:07:49.419 "flush": false, 00:07:49.419 "reset": true, 00:07:49.419 "nvme_admin": false, 00:07:49.419 "nvme_io": false, 00:07:49.419 "nvme_io_md": false, 00:07:49.419 "write_zeroes": true, 00:07:49.419 "zcopy": false, 00:07:49.419 "get_zone_info": false, 00:07:49.419 "zone_management": false, 00:07:49.419 "zone_append": false, 00:07:49.419 "compare": false, 00:07:49.419 "compare_and_write": false, 00:07:49.419 "abort": false, 00:07:49.419 "seek_hole": true, 00:07:49.419 "seek_data": true, 00:07:49.419 "copy": false, 00:07:49.419 "nvme_iov_md": false 00:07:49.419 }, 00:07:49.419 "driver_specific": { 00:07:49.419 "lvol": { 00:07:49.419 "lvol_store_uuid": "bce1a276-6aad-4715-8e21-1418d8b3a29e", 00:07:49.419 "base_bdev": "aio_bdev", 00:07:49.419 "thin_provision": false, 00:07:49.419 "num_allocated_clusters": 38, 00:07:49.419 "snapshot": false, 00:07:49.419 "clone": false, 00:07:49.419 "esnap_clone": false 00:07:49.419 } 00:07:49.419 } 00:07:49.419 } 00:07:49.419 ] 00:07:49.419 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:49.419 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:49.419 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:49.683 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:49.683 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:49.683 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:49.683 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.683 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 199c9fc7-47cc-4da3-8245-9298e45226f4 00:07:49.953 07:08:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bce1a276-6aad-4715-8e21-1418d8b3a29e 00:07:50.213 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.214 00:07:50.214 real 0m17.588s 00:07:50.214 user 0m45.203s 00:07:50.214 sys 0m2.841s 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.214 ************************************ 00:07:50.214 END TEST lvs_grow_dirty 00:07:50.214 ************************************ 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:50.214 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:50.475 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:50.475 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:50.475 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:50.475 07:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:50.475 nvmf_trace.0 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.475 rmmod nvme_tcp 00:07:50.475 rmmod nvme_fabrics 00:07:50.475 rmmod nvme_keyring 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:50.475 
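
Teardown above runs strictly bottom-up — lvol (@92), lvstore (@93), AIO bdev (@94), backing file (@95) — and on exit the harness archives the target's trace shared memory before unloading the initiator-side kernel modules. A sketch of those last steps (output directory is a placeholder; the trace name follows the '-i 0' instance id passed to nvmf_tgt, and the NOTICE earlier in the log points at 'spdk_trace' for decoding it):

  rpc bdev_lvol_delete "$LVOL_UUID"
  rpc bdev_lvol_delete_lvstore -u "$LVS_UUID"
  rpc bdev_aio_delete aio_bdev
  rm -f "$AIO_FILE"
  tar -C /dev/shm/ -czf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # detach the kernel initiator
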
07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1093146 ']' 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1093146 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1093146 ']' 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1093146 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1093146 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1093146' 00:07:50.475 killing process with pid 1093146 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1093146 00:07:50.475 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1093146 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.736 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.650 00:07:52.650 real 0m44.774s 00:07:52.650 user 1m6.937s 00:07:52.650 sys 0m10.907s 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 ************************************ 00:07:52.650 END TEST nvmf_lvs_grow 00:07:52.650 ************************************ 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.650 07:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.911 ************************************ 00:07:52.911 START TEST nvmf_bdev_io_wait 00:07:52.911 ************************************ 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:52.911 * Looking for test storage... 00:07:52.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:52.911 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.912 --rc genhtml_branch_coverage=1 00:07:52.912 --rc genhtml_function_coverage=1 00:07:52.912 --rc genhtml_legend=1 00:07:52.912 --rc geninfo_all_blocks=1 00:07:52.912 --rc geninfo_unexecuted_blocks=1 00:07:52.912 00:07:52.912 ' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.912 --rc genhtml_branch_coverage=1 00:07:52.912 --rc genhtml_function_coverage=1 00:07:52.912 --rc genhtml_legend=1 00:07:52.912 --rc geninfo_all_blocks=1 00:07:52.912 --rc geninfo_unexecuted_blocks=1 00:07:52.912 00:07:52.912 ' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.912 --rc genhtml_branch_coverage=1 00:07:52.912 --rc genhtml_function_coverage=1 00:07:52.912 --rc genhtml_legend=1 00:07:52.912 --rc geninfo_all_blocks=1 00:07:52.912 --rc geninfo_unexecuted_blocks=1 00:07:52.912 00:07:52.912 ' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:52.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.912 --rc genhtml_branch_coverage=1 00:07:52.912 --rc genhtml_function_coverage=1 00:07:52.912 --rc genhtml_legend=1 00:07:52.912 --rc geninfo_all_blocks=1 00:07:52.912 --rc geninfo_unexecuted_blocks=1 00:07:52.912 00:07:52.912 ' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.912 07:08:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.912 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.174 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.174 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.174 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.174 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.321 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:01.322 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:01.322 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.322 07:08:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:01.322 Found net devices under 0000:31:00.0: cvl_0_0 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:01.322 Found net devices under 0000:31:00.1: cvl_0_1 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.322 07:08:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.322 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.322 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.322 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.322 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:08:01.583 00:08:01.583 --- 10.0.0.2 ping statistics --- 00:08:01.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.583 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:08:01.583 00:08:01.583 --- 10.0.0.1 ping statistics --- 00:08:01.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.583 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.583 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1098676 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1098676 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1098676 ']' 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.584 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.584 [2024-11-20 07:08:36.297304] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:08:01.584 [2024-11-20 07:08:36.297369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.845 [2024-11-20 07:08:36.388143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.845 [2024-11-20 07:08:36.430492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.845 [2024-11-20 07:08:36.430532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.845 [2024-11-20 07:08:36.430540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.845 [2024-11-20 07:08:36.430547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.845 [2024-11-20 07:08:36.430553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.845 [2024-11-20 07:08:36.432181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.845 [2024-11-20 07:08:36.432303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.845 [2024-11-20 07:08:36.432459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.845 [2024-11-20 07:08:36.432459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.417 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:02.678 [2024-11-20 07:08:37.212813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.678 Malloc0 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.678 [2024-11-20 07:08:37.272000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1099024 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1099026 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.678 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.678 { 00:08:02.678 "params": { 
00:08:02.678 "name": "Nvme$subsystem", 00:08:02.678 "trtype": "$TEST_TRANSPORT", 00:08:02.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.678 "adrfam": "ipv4", 00:08:02.678 "trsvcid": "$NVMF_PORT", 00:08:02.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.678 "hdgst": ${hdgst:-false}, 00:08:02.678 "ddgst": ${ddgst:-false} 00:08:02.678 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 } 00:08:02.679 EOF 00:08:02.679 )") 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1099028 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1099031 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.679 { 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme$subsystem", 00:08:02.679 "trtype": "$TEST_TRANSPORT", 00:08:02.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "$NVMF_PORT", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.679 "hdgst": ${hdgst:-false}, 00:08:02.679 "ddgst": ${ddgst:-false} 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 } 00:08:02.679 EOF 00:08:02.679 )") 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.679 { 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme$subsystem", 00:08:02.679 "trtype": "$TEST_TRANSPORT", 00:08:02.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "$NVMF_PORT", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.679 "hdgst": ${hdgst:-false}, 
00:08:02.679 "ddgst": ${ddgst:-false} 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 } 00:08:02.679 EOF 00:08:02.679 )") 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:02.679 { 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme$subsystem", 00:08:02.679 "trtype": "$TEST_TRANSPORT", 00:08:02.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "$NVMF_PORT", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.679 "hdgst": ${hdgst:-false}, 00:08:02.679 "ddgst": ${ddgst:-false} 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 } 00:08:02.679 EOF 00:08:02.679 )") 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1099024 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme1", 00:08:02.679 "trtype": "tcp", 00:08:02.679 "traddr": "10.0.0.2", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "4420", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.679 "hdgst": false, 00:08:02.679 "ddgst": false 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 }' 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme1", 00:08:02.679 "trtype": "tcp", 00:08:02.679 "traddr": "10.0.0.2", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "4420", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.679 "hdgst": false, 00:08:02.679 "ddgst": false 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 }' 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme1", 00:08:02.679 "trtype": "tcp", 00:08:02.679 "traddr": "10.0.0.2", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "4420", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.679 "hdgst": false, 00:08:02.679 "ddgst": false 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 }' 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:02.679 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:02.679 "params": { 00:08:02.679 "name": "Nvme1", 00:08:02.679 "trtype": "tcp", 00:08:02.679 "traddr": "10.0.0.2", 00:08:02.679 "adrfam": "ipv4", 00:08:02.679 "trsvcid": "4420", 00:08:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:02.679 "hdgst": false, 00:08:02.679 "ddgst": false 00:08:02.679 }, 00:08:02.679 "method": "bdev_nvme_attach_controller" 00:08:02.679 }' 00:08:02.679 [2024-11-20 07:08:37.328855] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:08:02.679 [2024-11-20 07:08:37.328922] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:02.679 [2024-11-20 07:08:37.330255] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:08:02.679 [2024-11-20 07:08:37.330304] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:02.679 [2024-11-20 07:08:37.330704] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:08:02.679 [2024-11-20 07:08:37.330749] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:02.679 [2024-11-20 07:08:37.339703] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:08:02.679 [2024-11-20 07:08:37.339748] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:02.940 [2024-11-20 07:08:37.498128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.940 [2024-11-20 07:08:37.528300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.940 [2024-11-20 07:08:37.552851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.940 [2024-11-20 07:08:37.581586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:02.940 [2024-11-20 07:08:37.615125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.940 [2024-11-20 07:08:37.644804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:02.940 [2024-11-20 07:08:37.668633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.940 [2024-11-20 07:08:37.696281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:03.201 Running I/O for 1 seconds... 00:08:03.201 Running I/O for 1 seconds... 00:08:03.201 Running I/O for 1 seconds... 00:08:03.201 Running I/O for 1 seconds... 00:08:04.146 18366.00 IOPS, 71.74 MiB/s 00:08:04.146 Latency(us) 00:08:04.146 [2024-11-20T06:08:38.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.146 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:04.146 Nvme1n1 : 1.01 18405.63 71.90 0.00 0.00 6937.61 3304.11 15073.28 00:08:04.146 [2024-11-20T06:08:38.913Z] =================================================================================================================== 00:08:04.146 [2024-11-20T06:08:38.913Z] Total : 18405.63 71.90 0.00 0.00 6937.61 3304.11 15073.28 00:08:04.146 12989.00 IOPS, 50.74 MiB/s 00:08:04.146 Latency(us) 00:08:04.146 [2024-11-20T06:08:38.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.146 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:04.146 Nvme1n1 : 1.01 13067.16 51.04 0.00 0.00 9766.17 4450.99 18677.76 00:08:04.146 [2024-11-20T06:08:38.913Z] =================================================================================================================== 00:08:04.146 [2024-11-20T06:08:38.913Z] Total : 13067.16 51.04 0.00 0.00 9766.17 4450.99 18677.76 00:08:04.146 183928.00 IOPS, 718.47 MiB/s 00:08:04.146 Latency(us) 00:08:04.146 [2024-11-20T06:08:38.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.146 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:04.146 Nvme1n1 : 1.00 183551.28 717.00 0.00 0.00 693.47 310.61 2020.69 00:08:04.146 [2024-11-20T06:08:38.913Z] =================================================================================================================== 00:08:04.146 [2024-11-20T06:08:38.913Z] Total : 183551.28 717.00 0.00 0.00 693.47 310.61 2020.69 00:08:04.407 11498.00 IOPS, 44.91 MiB/s 00:08:04.407 Latency(us) 00:08:04.407 [2024-11-20T06:08:39.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.407 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:04.407 Nvme1n1 : 1.01 11575.79 45.22 0.00 0.00 11021.97 4587.52 20425.39 00:08:04.407 [2024-11-20T06:08:39.174Z] 
=================================================================================================================== 00:08:04.407 [2024-11-20T06:08:39.174Z] Total : 11575.79 45.22 0.00 0.00 11021.97 4587.52 20425.39 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1099026 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1099028 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1099031 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.407 rmmod nvme_tcp 00:08:04.407 rmmod nvme_fabrics 00:08:04.407 rmmod nvme_keyring 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1098676 ']' 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1098676 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1098676 ']' 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1098676 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1098676 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 1098676' 00:08:04.407 killing process with pid 1098676 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1098676 00:08:04.407 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1098676 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.668 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.580 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.580 00:08:06.580 real 0m13.903s 00:08:06.580 user 0m19.020s 00:08:06.580 sys 0m7.915s 00:08:06.581 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.581 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.581 ************************************ 00:08:06.581 END TEST nvmf_bdev_io_wait 00:08:06.581 ************************************ 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.841 ************************************ 00:08:06.841 START TEST nvmf_queue_depth 00:08:06.841 ************************************ 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:06.841 * Looking for test storage... 
00:08:06.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.841 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.842 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.104 --rc genhtml_branch_coverage=1 00:08:07.104 --rc genhtml_function_coverage=1 00:08:07.104 --rc genhtml_legend=1 00:08:07.104 --rc geninfo_all_blocks=1 00:08:07.104 --rc geninfo_unexecuted_blocks=1 00:08:07.104 00:08:07.104 ' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.104 --rc genhtml_branch_coverage=1 00:08:07.104 --rc genhtml_function_coverage=1 00:08:07.104 --rc genhtml_legend=1 00:08:07.104 --rc geninfo_all_blocks=1 00:08:07.104 --rc geninfo_unexecuted_blocks=1 00:08:07.104 00:08:07.104 ' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.104 --rc genhtml_branch_coverage=1 00:08:07.104 --rc genhtml_function_coverage=1 00:08:07.104 --rc genhtml_legend=1 00:08:07.104 --rc geninfo_all_blocks=1 00:08:07.104 --rc geninfo_unexecuted_blocks=1 00:08:07.104 00:08:07.104 ' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.104 --rc genhtml_branch_coverage=1 00:08:07.104 --rc genhtml_function_coverage=1 00:08:07.104 --rc genhtml_legend=1 00:08:07.104 --rc geninfo_all_blocks=1 00:08:07.104 --rc geninfo_unexecuted_blocks=1 00:08:07.104 00:08:07.104 ' 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
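The long run of scripts/common.sh lines above is a field-by-field version comparison: the harness extracts the installed lcov version with awk '{print $NF}' and tests it against 1.15 and 2 to pick coverage flags. A standalone equivalent of that check, assuming GNU sort's -V version ordering is available (the harness compares component by component instead):

    # lt A B: true when version A sorts strictly before version B.
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    ver=$(lcov --version | awk '{print $NF}')
    if lt "$ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi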
-- nvmf/common.sh@7 -- # uname -s 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.104 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
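The `[: : integer expression expected` line above is a genuine shell diagnostic captured by the log: at nvmf/common.sh line 33 an unset variable expands to the empty string, and bash's test builtin cannot compare '' with -eq, so the test errors (harmlessly here, since the branch is skipped either way). The usual hardening, shown with a hypothetical FLAG variable rather than the suite's actual one:

    [ "$FLAG" -eq 1 ]        # errors when FLAG is unset or empty
    [ "${FLAG:-0}" -eq 1 ]   # default the expansion, keeping the numeric test
    [[ $FLAG == 1 ]]         # or compare as a string, which never type-errors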
MALLOC_BLOCK_SIZE=512 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.105 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.244 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.244 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.244 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.244 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.245 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
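The gather_supported_nvmf_pci_devs walk above matches known E810/X722/ConnectX device IDs against the PCI bus and then resolves each function to its kernel netdev through sysfs, which is where the two `Found net devices under 0000:31:00.x: cvl_0_x` lines come from. A hedged standalone sketch of the same lookup for the E810 ID seen here (0x8086:0x159b), assuming pciutils is installed:

    # List net devices backed by Intel E810 (8086:159b) functions via sysfs.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
        done
    done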
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.245 07:08:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:08:15.505 00:08:15.505 --- 10.0.0.2 ping statistics --- 00:08:15.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.505 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:08:15.505 00:08:15.505 --- 10.0.0.1 ping statistics --- 00:08:15.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.505 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1104093 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1104093 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1104093 ']' 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.505 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.505 [2024-11-20 07:08:50.222149] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
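nvmf_tcp_init, traced above, builds the whole two-endpoint topology on one box: the target port cvl_0_0 is moved into a fresh network namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an ACCEPT rule is inserted first for TCP/4420, and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. Condensed to its commands (names and addresses copied from the trace; error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &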
00:08:15.505 [2024-11-20 07:08:50.222199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.766 [2024-11-20 07:08:50.330132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.766 [2024-11-20 07:08:50.379075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.766 [2024-11-20 07:08:50.379127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.766 [2024-11-20 07:08:50.379136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.766 [2024-11-20 07:08:50.379143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.766 [2024-11-20 07:08:50.379149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.766 [2024-11-20 07:08:50.379917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.337 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.337 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 [2024-11-20 07:08:51.079547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.338 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.598 Malloc0 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.598 07:08:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.598 [2024-11-20 07:08:51.124761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1104436 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.598 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1104436 /var/tmp/bdevperf.sock 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1104436 ']' 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.599 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.599 [2024-11-20 07:08:51.189502] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
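Target provisioning for this test is exactly five RPCs, all visible in the trace: create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, create subsystem cnode1 (allow-any-host, fixed serial), attach the bdev as a namespace, and listen on 10.0.0.2:4420. As plain rpc.py calls, with flags copied verbatim from the trace and the long workspace paths shortened:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420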
00:08:16.599 [2024-11-20 07:08:51.189565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104436 ] 00:08:16.599 [2024-11-20 07:08:51.272620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.599 [2024-11-20 07:08:51.314488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.541 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:17.541 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:17.541 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:17.541 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.541 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.541 NVMe0n1 00:08:17.541 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.541 07:08:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.541 Running I/O for 10 seconds... 00:08:19.864 10240.00 IOPS, 40.00 MiB/s [2024-11-20T06:08:55.572Z] 10850.50 IOPS, 42.38 MiB/s [2024-11-20T06:08:56.514Z] 11261.67 IOPS, 43.99 MiB/s [2024-11-20T06:08:57.455Z] 11286.50 IOPS, 44.09 MiB/s [2024-11-20T06:08:58.411Z] 11445.00 IOPS, 44.71 MiB/s [2024-11-20T06:08:59.398Z] 11463.33 IOPS, 44.78 MiB/s [2024-11-20T06:09:00.340Z] 11525.71 IOPS, 45.02 MiB/s [2024-11-20T06:09:01.725Z] 11523.38 IOPS, 45.01 MiB/s [2024-11-20T06:09:02.668Z] 11595.56 IOPS, 45.30 MiB/s [2024-11-20T06:09:02.668Z] 11598.70 IOPS, 45.31 MiB/s 00:08:27.901 Latency(us) 00:08:27.901 [2024-11-20T06:09:02.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.901 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:27.901 Verification LBA range: start 0x0 length 0x4000 00:08:27.901 NVMe0n1 : 10.04 11633.34 45.44 0.00 0.00 87731.10 3153.92 66409.81 00:08:27.901 [2024-11-20T06:09:02.668Z] =================================================================================================================== 00:08:27.901 [2024-11-20T06:09:02.668Z] Total : 11633.34 45.44 0.00 0.00 87731.10 3153.92 66409.81 00:08:27.901 { 00:08:27.901 "results": [ 00:08:27.901 { 00:08:27.901 "job": "NVMe0n1", 00:08:27.901 "core_mask": "0x1", 00:08:27.901 "workload": "verify", 00:08:27.901 "status": "finished", 00:08:27.901 "verify_range": { 00:08:27.901 "start": 0, 00:08:27.901 "length": 16384 00:08:27.901 }, 00:08:27.901 "queue_depth": 1024, 00:08:27.901 "io_size": 4096, 00:08:27.901 "runtime": 10.039163, 00:08:27.901 "iops": 11633.340349190465, 00:08:27.901 "mibps": 45.44273573902525, 00:08:27.901 "io_failed": 0, 00:08:27.901 "io_timeout": 0, 00:08:27.901 "avg_latency_us": 87731.09821872493, 00:08:27.901 "min_latency_us": 3153.92, 00:08:27.901 "max_latency_us": 66409.81333333334 00:08:27.901 } 00:08:27.901 ], 00:08:27.901 "core_count": 1 00:08:27.901 } 00:08:27.901 07:09:02 
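The measurement half runs bdevperf on its own RPC socket (-z holds it until told to start), attaches the target as bdev NVMe0n1 over TCP, then triggers the run with bdevperf.py; that is where the per-second IOPS samples and the JSON summary above come from. Flags copied from the trace, paths shortened:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The summary is self-consistent under Little's law: with queue depth 1024 at 11633.34 IOPS, the expected average latency is 1024 / 11633.34 ≈ 0.0880 s ≈ 88 ms, in line with the reported 87731.10 us average.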
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1104436 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1104436 ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1104436 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1104436 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1104436' 00:08:27.901 killing process with pid 1104436 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1104436 00:08:27.901 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.901 00:08:27.901 Latency(us) 00:08:27.901 [2024-11-20T06:09:02.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.901 [2024-11-20T06:09:02.668Z] =================================================================================================================== 00:08:27.901 [2024-11-20T06:09:02.668Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1104436 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.901 rmmod nvme_tcp 00:08:27.901 rmmod nvme_fabrics 00:08:27.901 rmmod nvme_keyring 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1104093 ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1104093 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1104093 ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 1104093 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.901 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1104093 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1104093' 00:08:28.162 killing process with pid 1104093 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1104093 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1104093 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.162 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.707 00:08:30.707 real 0m23.448s 00:08:30.707 user 0m26.032s 00:08:30.707 sys 0m7.726s 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.707 ************************************ 00:08:30.707 END TEST nvmf_queue_depth 00:08:30.707 ************************************ 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.707 ************************************ 00:08:30.707 START TEST nvmf_target_multipath 00:08:30.707 ************************************ 00:08:30.707 07:09:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.707 * Looking for test storage... 00:08:30.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:30.707 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.708 --rc genhtml_branch_coverage=1 00:08:30.708 --rc genhtml_function_coverage=1 00:08:30.708 --rc genhtml_legend=1 00:08:30.708 --rc geninfo_all_blocks=1 00:08:30.708 --rc geninfo_unexecuted_blocks=1 00:08:30.708 00:08:30.708 ' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.708 --rc genhtml_branch_coverage=1 00:08:30.708 --rc genhtml_function_coverage=1 00:08:30.708 --rc genhtml_legend=1 00:08:30.708 --rc geninfo_all_blocks=1 00:08:30.708 --rc geninfo_unexecuted_blocks=1 00:08:30.708 00:08:30.708 ' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.708 --rc genhtml_branch_coverage=1 00:08:30.708 --rc genhtml_function_coverage=1 00:08:30.708 --rc genhtml_legend=1 00:08:30.708 --rc geninfo_all_blocks=1 00:08:30.708 --rc geninfo_unexecuted_blocks=1 00:08:30.708 00:08:30.708 ' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.708 --rc genhtml_branch_coverage=1 00:08:30.708 --rc genhtml_function_coverage=1 00:08:30.708 --rc genhtml_legend=1 00:08:30.708 --rc geninfo_all_blocks=1 00:08:30.708 --rc geninfo_unexecuted_blocks=1 00:08:30.708 00:08:30.708 ' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
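The enormous PATH echoed above is an artifact of re-sourcing paths/export.sh once per test: each pass prepends the same go/protoc/golangci directories again, so duplicates pile up over the run (harmless, just noisy). A standalone dedup sketch in bash 4+, assuming no PATH entry contains a colon or glob characters:

    dedup_path() {
        local IFS=: dir out=
        declare -A seen
        for dir in $PATH; do
            [[ -n ${seen[$dir]} ]] && continue   # keep first occurrence only
            seen[$dir]=1
            out+=${out:+:}$dir
        done
        PATH=$out
    }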
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.708 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.709 07:09:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:38.850 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:38.850 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.850 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:38.851 Found net devices under 0000:31:00.0: cvl_0_0 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.851 07:09:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:38.851 Found net devices under 0000:31:00.1: cvl_0_1 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.851 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:08:39.112 00:08:39.112 --- 10.0.0.2 ping statistics --- 00:08:39.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.112 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:08:39.112 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:08:39.113 00:08:39.113 --- 10.0.0.1 ping statistics --- 00:08:39.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.113 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:39.113 only one NIC for nvmf test 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
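[Annotation] The nvmf_tcp_init trace above (nvmf/common.sh, roughly lines 250-291) is the whole test-network bring-up: one port of the back-to-back-cabled E810 pair is moved into a private network namespace so that target and initiator can exchange real NVMe/TCP traffic on a single host. Collected from the commands visible in the trace (interface names cvl_0_0/cvl_0_1 and addresses taken from the log; this is a sketch assembled from the traced commands, not a verbatim copy of common.sh):

    # flush any stale addressing, then move the target-side port into its own netns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator stays in the root namespace, target lives inside the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings (0.497 ms and 0.229 ms round trips above) are the harness's gate: only after both directions answer does the test proceed to load nvme-tcp and start the target.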
00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.113 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.113 rmmod nvme_tcp 00:08:39.113 rmmod nvme_fabrics 00:08:39.374 rmmod nvme_keyring 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.374 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.289 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.289 00:08:41.289 real 0m11.091s 00:08:41.289 user 0m2.458s 00:08:41.289 sys 0m6.564s 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.289 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.289 ************************************ 00:08:41.289 END TEST nvmf_target_multipath 00:08:41.289 ************************************ 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.550 ************************************ 00:08:41.550 START TEST nvmf_zcopy 00:08:41.550 ************************************ 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.550 * Looking for test storage... 
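[Annotation] Two things in the trace above are worth flagging. First, the recurring "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message is benign bash noise, not a test failure: the trace shows the test '[' '' -eq 1 ']', i.e. an unset variable expanded to an empty string inside a numeric test, which `[` rejects and the script simply falls through; the generic hardening would be a default expansion such as [ "${VAR:-0}" -eq 1 ] (VAR here is a placeholder, the actual variable name is not visible in this log). Second, nvmf_target_multipath ended after only 11 seconds without any I/O because NVMF_SECOND_TARGET_IP was left empty during setup, and multipath needs two target paths. Reconstructed from the xtrace of multipath.sh lines 45-48 (a sketch inferred from the trace, not copied from the repository; the exact variable tested is an assumption):

    # target/multipath.sh, as suggested by the trace: bail out cleanly on single-path rigs
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
            echo 'only one NIC for nvmf test'
            nvmftestfini
            exit 0
    fi

That is why the suite logs "only one NIC for nvmf test" and still records END TEST nvmf_target_multipath as passed before moving on to nvmf_zcopy.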
00:08:41.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.550 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.812 --rc genhtml_branch_coverage=1 00:08:41.812 --rc genhtml_function_coverage=1 00:08:41.812 --rc genhtml_legend=1 00:08:41.812 --rc geninfo_all_blocks=1 00:08:41.812 --rc geninfo_unexecuted_blocks=1 00:08:41.812 00:08:41.812 ' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.812 --rc genhtml_branch_coverage=1 00:08:41.812 --rc genhtml_function_coverage=1 00:08:41.812 --rc genhtml_legend=1 00:08:41.812 --rc geninfo_all_blocks=1 00:08:41.812 --rc geninfo_unexecuted_blocks=1 00:08:41.812 00:08:41.812 ' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.812 --rc genhtml_branch_coverage=1 00:08:41.812 --rc genhtml_function_coverage=1 00:08:41.812 --rc genhtml_legend=1 00:08:41.812 --rc geninfo_all_blocks=1 00:08:41.812 --rc geninfo_unexecuted_blocks=1 00:08:41.812 00:08:41.812 ' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:41.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.812 --rc genhtml_branch_coverage=1 00:08:41.812 --rc genhtml_function_coverage=1 00:08:41.812 --rc genhtml_legend=1 00:08:41.812 --rc geninfo_all_blocks=1 00:08:41.812 --rc geninfo_unexecuted_blocks=1 00:08:41.812 00:08:41.812 ' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.812 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.957 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.957 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.957 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.957 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:49.958 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:49.958 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:49.958 Found net devices under 0000:31:00.0: cvl_0_0 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:49.958 Found net devices under 0000:31:00.1: cvl_0_1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:08:49.958 00:08:49.958 --- 10.0.0.2 ping statistics --- 00:08:49.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.958 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:08:49.958 00:08:49.958 --- 10.0.0.1 ping statistics --- 00:08:49.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.958 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:49.958 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.959 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1116736 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1116736 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1116736 ']' 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.219 07:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.219 [2024-11-20 07:09:24.800692] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
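[Annotation] The nvmfappstart step traced here launches the SPDK target inside the namespace and blocks until its RPC socket answers. A minimal equivalent of what the trace shows, assuming the default /var/tmp/spdk.sock RPC socket (waitforlisten is the harness helper; the rpc_get_methods poll below is a stand-in for what it does, not the helper's actual body):

    # -i 0: shared-memory id; -e 0xFFFF: enable all tracepoint groups; -m 0x2: one reactor on core 1
    ip netns exec cvl_0_0_ns_spdk \
            ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the RPC socket until the app is up and accepting commands
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1
    done

The "Reactor started on core 1" notice below matches the -m 0x2 core mask, and nvmfpid (1116736 in this run) is what the trap handlers later use to tear the target down.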
00:08:50.219 [2024-11-20 07:09:24.800742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.219 [2024-11-20 07:09:24.893029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.219 [2024-11-20 07:09:24.927528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.219 [2024-11-20 07:09:24.927562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.219 [2024-11-20 07:09:24.927571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.219 [2024-11-20 07:09:24.927577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.219 [2024-11-20 07:09:24.927583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.219 [2024-11-20 07:09:24.928173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 [2024-11-20 07:09:25.056126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 [2024-11-20 07:09:25.072343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 malloc0 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:50.480 { 00:08:50.480 "params": { 00:08:50.480 "name": "Nvme$subsystem", 00:08:50.480 "trtype": "$TEST_TRANSPORT", 00:08:50.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.480 "adrfam": "ipv4", 00:08:50.480 "trsvcid": "$NVMF_PORT", 00:08:50.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.480 "hdgst": ${hdgst:-false}, 00:08:50.480 "ddgst": ${ddgst:-false} 00:08:50.480 }, 00:08:50.480 "method": "bdev_nvme_attach_controller" 00:08:50.480 } 00:08:50.480 EOF 00:08:50.480 )") 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
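[Annotation] Collected from the rpc_cmd calls traced above, the entire target configuration for the zcopy run is six RPCs. Shown here as direct rpc.py invocations for readability (a convenience rewrite of the traced commands, same arguments, not a new script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport, in-capsule data 0, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches through the JSON generated by gen_nvmf_target_json (fed via /dev/fd/62 and printed below) and runs a 10-second verify workload at queue depth 128 with 8 KiB I/O against that namespace.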
00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:50.480 07:09:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:50.480 "params": {
00:08:50.480 "name": "Nvme1",
00:08:50.481 "trtype": "tcp",
00:08:50.481 "traddr": "10.0.0.2",
00:08:50.481 "adrfam": "ipv4",
00:08:50.481 "trsvcid": "4420",
00:08:50.481 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:50.481 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:50.481 "hdgst": false,
00:08:50.481 "ddgst": false
00:08:50.481 },
00:08:50.481 "method": "bdev_nvme_attach_controller"
00:08:50.481 }'
00:08:50.481 [2024-11-20 07:09:25.158492] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:08:50.481 [2024-11-20 07:09:25.158550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116802 ]
00:08:50.481 [2024-11-20 07:09:25.240190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.741 [2024-11-20 07:09:25.281830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.001 Running I/O for 10 seconds...
00:08:52.886 6658.00 IOPS, 52.02 MiB/s [2024-11-20T06:09:28.598Z] 6727.00 IOPS, 52.55 MiB/s [2024-11-20T06:09:29.542Z] 6959.33 IOPS, 54.37 MiB/s [2024-11-20T06:09:30.927Z] 7662.75 IOPS, 59.87 MiB/s [2024-11-20T06:09:31.869Z] 8082.80 IOPS, 63.15 MiB/s [2024-11-20T06:09:32.810Z] 8364.83 IOPS, 65.35 MiB/s [2024-11-20T06:09:33.753Z] 8565.86 IOPS, 66.92 MiB/s [2024-11-20T06:09:34.696Z] 8719.50 IOPS, 68.12 MiB/s [2024-11-20T06:09:35.639Z] 8836.44 IOPS, 69.03 MiB/s [2024-11-20T06:09:35.639Z] 8930.40 IOPS, 69.77 MiB/s
00:09:00.872 Latency(us)
00:09:00.872 [2024-11-20T06:09:35.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:00.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:00.872 Verification LBA range: start 0x0 length 0x1000
00:09:00.872 Nvme1n1 : 10.01 8932.54 69.79 0.00 0.00 14277.35 2075.31 27962.03
00:09:00.872 [2024-11-20T06:09:35.639Z] ===================================================================================================================
00:09:00.872 [2024-11-20T06:09:35.639Z] Total : 8932.54 69.79 0.00 0.00 14277.35 2075.31 27962.03
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1118976
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:01.134 {
00:09:01.134 "params": {
00:09:01.134 "name": "Nvme$subsystem",
00:09:01.134 "trtype": "$TEST_TRANSPORT",
00:09:01.134 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:01.134 "adrfam": "ipv4",
00:09:01.134 "trsvcid": "$NVMF_PORT",
00:09:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:01.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:01.134 "hdgst": ${hdgst:-false},
00:09:01.134 "ddgst": ${ddgst:-false}
00:09:01.134 },
00:09:01.134 "method": "bdev_nvme_attach_controller"
00:09:01.134 }
00:09:01.134 EOF
00:09:01.134 )")
00:09:01.134 [2024-11-20 07:09:35.667106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.667139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:01.134 07:09:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:01.134 "params": {
00:09:01.134 "name": "Nvme1",
00:09:01.134 "trtype": "tcp",
00:09:01.134 "traddr": "10.0.0.2",
00:09:01.134 "adrfam": "ipv4",
00:09:01.134 "trsvcid": "4420",
00:09:01.134 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:01.134 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:01.134 "hdgst": false,
00:09:01.134 "ddgst": false
00:09:01.134 },
00:09:01.134 "method": "bdev_nvme_attach_controller"
00:09:01.134 }'
00:09:01.134 [2024-11-20 07:09:35.679104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.679114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.691132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.691140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.703162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.703170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.710983] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:09:01.134 [2024-11-20 07:09:35.711032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118976 ]
00:09:01.134 [2024-11-20 07:09:35.715193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.715202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.727223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.727230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.739252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.739260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.751282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.751291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.763312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.763320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.775343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.775350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.787355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-11-20 07:09:35.787374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.787381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.799405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.799415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.811436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.811446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.823250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 [2024-11-20 07:09:35.823469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.823479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.835499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.835508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.847532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.847544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.859560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.134 [2024-11-20 07:09:35.859572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.871590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.871600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.883620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.883628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.134 [2024-11-20 07:09:35.895661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.895675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.907688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.907701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.919716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.919732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.931744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.931752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.943776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.943783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.955806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.955812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.967839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.967849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.979874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.979883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:35.991904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:35.991911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.003933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.003940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.015965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.015974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.027994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.028000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.040023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.395 [2024-11-20 07:09:36.040030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.052056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.395 [2024-11-20 07:09:36.052062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.064089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.395 [2024-11-20 07:09:36.064098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.395 [2024-11-20 07:09:36.076119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.395 [2024-11-20 07:09:36.076126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.396 [2024-11-20 07:09:36.088150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.396 [2024-11-20 07:09:36.088157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.396 [2024-11-20 07:09:36.100182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.396 [2024-11-20 07:09:36.100190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.396 [2024-11-20 07:09:36.112213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.396 [2024-11-20 07:09:36.112220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.396 [2024-11-20 07:09:36.155320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.396 [2024-11-20 07:09:36.155334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.164357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:01.656 [2024-11-20 07:09:36.164366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 Running I/O for 5 seconds...
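Everything from here to the end of the excerpt is the second phase of the test: while this 5-second randrw job runs, add-namespace RPCs for NSID 1 keep arriving and are rejected because malloc0 still holds that NSID — each error pair below is one such round-trip, logged first by spdk_nvmf_subsystem_add_ns_ext and then surfaced by the RPC layer. The same rejection can be reproduced by hand against the target state built earlier (a sketch; the RPC call returns a JSON-RPC error while the target logs the two lines seen throughout this run):

  # NSID 1 is already occupied by malloc0, so this add is expected to fail
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # target log: subsystem.c: *ERROR*: Requested NSID 1 already in use
  #             nvmf_rpc.c:  *ERROR*: Unable to add namespace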
00:09:01.656 [2024-11-20 07:09:36.180028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.180044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.193654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.193670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.206832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.206849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.220611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.220626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.233101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.233117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.246029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.246044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.259387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.259402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.272706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.272720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.285223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.285238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.298345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.298359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.310705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.310720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.323457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.323473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.337045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.337061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.349849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.349868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.362433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.362448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.375824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.375840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.389779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.389794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.402549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.402564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.656 [2024-11-20 07:09:36.414782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.414797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.428394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.428409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.441798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.441812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.454370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.454385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.467633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.467648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.480576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.480591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.492829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.492844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.505537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.505552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.518801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.518816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.532232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.532247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.545630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.545644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.559070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.559084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.572392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.572407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.584951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.584966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.598366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.598380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.611833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.611848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.625016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.625030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.638386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.638401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.651233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.651247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.664446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.664461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:01.918 [2024-11-20 07:09:36.677751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.677766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.690450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.690466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.703894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.703909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.717069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.717084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.730193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.730208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.743694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.743710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.756690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.756705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.769811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.769826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.782881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.782896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.795687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.795701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.807974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.807989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.820756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.820772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.834386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.834401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.847873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.847888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.861406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.861421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.875416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.875431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.888502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.888517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.902306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.902321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.915096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.915110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.928296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.928312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.180 [2024-11-20 07:09:36.941207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.941222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:36.954668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.954683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:36.967166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.967181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:36.979869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.979884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:36.992739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:36.992753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.005686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.005701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.019057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.019072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.032642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.032658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.045094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.045108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.057691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.057706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.070792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.070807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.084582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.084596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.098008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.098023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.110817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.110833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.123237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.123251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.135969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.135984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.148775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.148790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.162072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.162092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.174235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.174250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 19185.00 IOPS, 149.88 MiB/s [2024-11-20T06:09:37.209Z] [2024-11-20 07:09:37.187223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.187238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.442 [2024-11-20 07:09:37.200366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.200381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.213194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.213209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.226415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.226430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.239854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.239873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.253118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.253133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.266430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.266444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.279488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.279503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.292376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.292391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.305010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.305025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.318114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.318129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.331928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.331942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.344356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.344371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.357819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.357834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.370263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.370277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.383241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.383256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.396712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.396727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.410160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.410183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.423867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.423883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.436523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.436538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.449196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.449212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.703 [2024-11-20 07:09:37.462346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.462360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.475734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.475749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.488896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.488911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.501580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.501595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.514555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.514569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.527705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.527720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.540973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.540988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.554413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.554427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.567791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.567806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.581607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.581621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.594252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.594267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.606982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.606998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.620695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.620710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.633772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.633786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.646457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.646472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.659846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.659871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.672791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.672805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.685781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.685796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.698268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.698282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.710924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.710939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.964 [2024-11-20 07:09:37.723630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.723645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.736682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.736696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.749377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.749392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.761546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.761560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.774954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.774969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.788758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.788772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.801764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.801779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.815300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.815315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.828636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.828650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.842166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.842180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.855002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.855017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.868216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.868231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.880642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.880656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.893477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.893491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.906743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.906758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.919442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.919458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.932546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.932560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.945858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.945876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.958808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.958823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.972285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.972299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.224 [2024-11-20 07:09:37.984887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.984902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:37.998054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:37.998069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.010603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.010618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.023044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.023058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.035440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.035455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.048736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.048751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.062122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.062136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.074952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.074966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.088567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.088581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.102423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.102437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.115936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.115951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.129522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.129537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.141974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.141989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.154636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.154651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.167130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.167145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 19254.50 IOPS, 150.43 MiB/s [2024-11-20T06:09:38.253Z] [2024-11-20 07:09:38.179992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.180006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.192493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.192507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.206130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.206144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.219999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.220014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.232976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.232990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.486 [2024-11-20 07:09:38.246228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.246242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.259752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:03.747 [2024-11-20 07:09:38.259767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.273206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.273220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.286248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.286263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.298581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.298596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.311311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.311326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.324274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.324289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.337952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.337966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.350550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.350564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.363263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.363277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.376194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.376208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.389435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.389450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.402745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.402761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.415684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.415699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.429262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.429277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.442025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.442040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.455590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.455605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.468593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.468608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.482155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.482170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.495784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.495798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:03.747 [2024-11-20 07:09:38.509475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.509490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.008 [2024-11-20 07:09:38.521875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.521889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.008 [2024-11-20 07:09:38.534743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.534757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.547427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.547442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.560757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.560772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.574153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.574167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.587513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.587528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.600677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.600692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.613796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.613811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.626769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.626784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.640187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.640205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.653108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.653123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.666166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.666182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.679570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.679585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.692764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.692779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.706036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.706051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.718943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.718958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.732090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.732104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.745064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.745079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.757964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.757979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.009 [2024-11-20 07:09:38.770968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.770983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.269 [2024-11-20 07:09:38.784099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.784114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.269 [2024-11-20 07:09:38.797751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.797766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.269 [2024-11-20 07:09:38.810951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.810966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.269 [2024-11-20 07:09:38.823800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-11-20 07:09:38.823815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:04.269 [2024-11-20 07:09:38.836672]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.836687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.849726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.849741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.862686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.862700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.875872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.875887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.888584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.888602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.901785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.901801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.915290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.915305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.928453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.928468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.941145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.941160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.954221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.954236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.967565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.967580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.980636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.980650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:38.993147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:38.993162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:39.006447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:39.006462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:39.019340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:39.019355] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.269 [2024-11-20 07:09:39.032438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.269 [2024-11-20 07:09:39.032454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.045300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.045315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.058382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.058397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.071449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.071464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.084563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.084577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.098065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.098080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.110734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.110749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.123800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.123814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.136520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.136538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.149965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.149980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.163800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.163815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.176229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.176243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 19254.33 IOPS, 150.42 MiB/s [2024-11-20T06:09:39.298Z] [2024-11-20 07:09:39.189756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.189770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.203356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.203372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 
07:09:39.216850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.216869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.229568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.229582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.242687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.242702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.255272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.255286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.268842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.268857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.282138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.282153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.531 [2024-11-20 07:09:39.295693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.531 [2024-11-20 07:09:39.295708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.309007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.309022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.321825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.321840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.335361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.335376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.348871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.348886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.362217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.362232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.375663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.375677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.389288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.389302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.402923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.402937] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.415966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.415980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.428689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.428703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.791 [2024-11-20 07:09:39.441859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.791 [2024-11-20 07:09:39.441877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.455118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.455132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.467949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.467963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.480289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.480304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.492571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.492585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.505603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.505617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.518037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.518051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.530379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.530393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.543201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.543215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.792 [2024-11-20 07:09:39.556736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.792 [2024-11-20 07:09:39.556750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.570107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.570122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.582771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.582786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.595637] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.595651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.608849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.608867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.622459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.622474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.635695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.635709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.649061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.649076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.662029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.662044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.675367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.675382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.689028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.689042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.701974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.701988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.714891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.714905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.728255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.728269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.741771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.741786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.754424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.754439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.767753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.767768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.781467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.781482] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.793800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.793815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.052 [2024-11-20 07:09:39.807320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.052 [2024-11-20 07:09:39.807335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.819960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.819975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.832958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.832972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.846393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.846407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.859136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.859150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.872618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.872633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.886263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.886278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.899560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.899574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.912967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.912981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.926384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.926399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.939463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.939477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.952899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.952913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.965698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.965712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.979345] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.979359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:39.992643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:39.992657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.006404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.006419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.019974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.019989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.032624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.032639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.046318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.046333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.059621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.059636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.313 [2024-11-20 07:09:40.073116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.313 [2024-11-20 07:09:40.073131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.085415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.085430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.098109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.098123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.111582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.111597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.124049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.124068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.136769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.136783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.150321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.150335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.163018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.163033] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.175291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.175305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 19275.00 IOPS, 150.59 MiB/s [2024-11-20T06:09:40.341Z] [2024-11-20 07:09:40.188239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.188254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.201024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.201038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.214431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.214446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.227931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.227946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.241024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.241038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.253877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.253891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.267096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.267110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.279857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.279876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.293355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.293369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.306449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.306463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.319522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.319536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.574 [2024-11-20 07:09:40.333119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.574 [2024-11-20 07:09:40.333133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.345830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.345844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 
07:09:40.358520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.358534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.371089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.371108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.384657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.384671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.397477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.397492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.410895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.410911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.423559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.423574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.436801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.436816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.449676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.449691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.463081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.463095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.475588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.475603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.488632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.488647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.501558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.501573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.514781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.514796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.528109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.528124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.541093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.541109] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.553609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.553624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.565945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.565959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.578566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.578581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.835 [2024-11-20 07:09:40.591994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.835 [2024-11-20 07:09:40.592009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.605156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.605171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.618531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.618550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.632006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.632020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.644785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.644799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.658359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.658374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.670854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.670873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.684338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.684353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.697421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.697436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.709829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.709844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.722797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.722812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.735125] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.735139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.748602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.748617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.761976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.761990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.774801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.774815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.788119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.788134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.801562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.801577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.814326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.814341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.096 [2024-11-20 07:09:40.827783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.096 [2024-11-20 07:09:40.827798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.097 [2024-11-20 07:09:40.841300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.097 [2024-11-20 07:09:40.841315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.097 [2024-11-20 07:09:40.854716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.097 [2024-11-20 07:09:40.854731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.357 [2024-11-20 07:09:40.867938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.357 [2024-11-20 07:09:40.867953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.357 [2024-11-20 07:09:40.881333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.357 [2024-11-20 07:09:40.881348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.357 [2024-11-20 07:09:40.894827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.357 [2024-11-20 07:09:40.894842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.357 [2024-11-20 07:09:40.908383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.908398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.920895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.920909] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.933916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.933930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.947349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.947364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.961012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.961026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.974213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.974228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:40.987149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:40.987164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.000879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.000894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.013350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.013365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.026172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.026186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.038984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.039000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.052144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.052159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.065597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.065612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.078065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.078080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.091298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.091313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.104626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.104641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.358 [2024-11-20 07:09:41.117876] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.358 [2024-11-20 07:09:41.117891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.131170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.131185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.144432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.144446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.156992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.157007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.170432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.170447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 19283.40 IOPS, 150.65 MiB/s [2024-11-20T06:09:41.386Z] [2024-11-20 07:09:41.183584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.183598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 00:09:06.619 Latency(us) 00:09:06.619 [2024-11-20T06:09:41.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.619 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:06.619 Nvme1n1 : 5.01 19284.80 150.66 0.00 0.00 6630.82 2921.81 14636.37 00:09:06.619 [2024-11-20T06:09:41.386Z] =================================================================================================================== 00:09:06.619 [2024-11-20T06:09:41.386Z] Total : 19284.80 150.66 0.00 0.00 6630.82 2921.81 14636.37 00:09:06.619 [2024-11-20 07:09:41.193074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.193088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.205101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.205112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.217135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.217147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.229164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.229176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.241195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.619 [2024-11-20 07:09:41.241205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.619 [2024-11-20 07:09:41.253222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.620 [2024-11-20 07:09:41.253231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.620 [2024-11-20 
07:09:41.265251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.620 [2024-11-20 07:09:41.265259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.620 [2024-11-20 07:09:41.277282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.620 [2024-11-20 07:09:41.277290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.620 [2024-11-20 07:09:41.289314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.620 [2024-11-20 07:09:41.289323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.620 [2024-11-20 07:09:41.301345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.620 [2024-11-20 07:09:41.301358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1118976) - No such process 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1118976 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.620 delay0 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.620 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:06.880 [2024-11-20 07:09:41.488052] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:15.022 Initializing NVMe Controllers 00:09:15.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:15.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:15.022 Initialization complete. Launching workers. 
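The xtrace above captures the interesting part of the teardown: the test removes the namespace backing cnode1, wraps malloc0 in a delay bdev, re-adds the slow bdev as NSID 1, and then drives the abort example against it. A minimal sketch of the same sequence done by hand with SPDK's rpc.py follows; the rpc.py path and the assumption of a default RPC socket are mine, while the method names and arguments are taken verbatim from the log.

```bash
#!/usr/bin/env bash
# Sketch of the namespace-swap step recorded in the xtrace above.
# Assumes a running SPDK nvmf target on its default RPC socket; the names
# (cnode1, malloc0, delay0) mirror the test, everything else is illustrative.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Drop the namespace the zcopy run was using (NSID 1 on cnode1).
"$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev; -r/-t/-w/-n set the average and p99 read and
# write latencies (microseconds), matching the arguments in the log.
"$RPC" bdev_delay_create -b malloc0 -d delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the artificially slow bdev as NSID 1 again.
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive abort traffic at it for 5 s, as target/zcopy.sh@56 does.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```

With I/O stuck behind one-second delays, nearly every outstanding command becomes a candidate for abort, which is what the summary below exercises.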
00:09:15.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 32263 00:09:15.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32377, failed to submit 119 00:09:15.023 success 32289, unsuccessful 88, failed 0 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.023 rmmod nvme_tcp 00:09:15.023 rmmod nvme_fabrics 00:09:15.023 rmmod nvme_keyring 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1116736 ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1116736 ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1116736' 00:09:15.023 killing process with pid 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1116736 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.023 07:09:48 
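The abort summary above is internally consistent on the reading that every submitted abort resolved as success, unsuccessful, or failed, and that every I/O either completed normally or was aborted. A quick arithmetic check, with the numbers copied from the log (the interpretation of the counters is an assumption, not something the tool states):

```bash
# Numbers from the abort summary above.
completed=233;   io_failed=32263             # NSID 1 I/O results
submitted=32377; not_submitted=119           # abort commands sent to the controller
success=32289;   unsuccessful=88; failed=0   # abort command outcomes

echo $(( success + unsuccessful + failed ))  # 32377 == submitted: every abort accounted for
echo $(( completed + io_failed ))            # 32496
echo $(( submitted + not_submitted ))        # 32496: one abort attempt per I/O
```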
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.023 07:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.410 00:09:16.410 real 0m34.770s 00:09:16.410 user 0m45.458s 00:09:16.410 sys 0m12.046s 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.410 ************************************ 00:09:16.410 END TEST nvmf_zcopy 00:09:16.410 ************************************ 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.410 ************************************ 00:09:16.410 START TEST nvmf_nmic 00:09:16.410 ************************************ 00:09:16.410 07:09:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:16.410 * Looking for test storage... 
00:09:16.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:16.410 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.672 --rc genhtml_branch_coverage=1 00:09:16.672 --rc genhtml_function_coverage=1 00:09:16.672 --rc genhtml_legend=1 00:09:16.672 --rc geninfo_all_blocks=1 00:09:16.672 --rc geninfo_unexecuted_blocks=1 00:09:16.672 00:09:16.672 ' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.672 --rc genhtml_branch_coverage=1 00:09:16.672 --rc genhtml_function_coverage=1 00:09:16.672 --rc genhtml_legend=1 00:09:16.672 --rc geninfo_all_blocks=1 00:09:16.672 --rc geninfo_unexecuted_blocks=1 00:09:16.672 00:09:16.672 ' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.672 --rc genhtml_branch_coverage=1 00:09:16.672 --rc genhtml_function_coverage=1 00:09:16.672 --rc genhtml_legend=1 00:09:16.672 --rc geninfo_all_blocks=1 00:09:16.672 --rc geninfo_unexecuted_blocks=1 00:09:16.672 00:09:16.672 ' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.672 --rc genhtml_branch_coverage=1 00:09:16.672 --rc genhtml_function_coverage=1 00:09:16.672 --rc genhtml_legend=1 00:09:16.672 --rc geninfo_all_blocks=1 00:09:16.672 --rc geninfo_unexecuted_blocks=1 00:09:16.672 00:09:16.672 ' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
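The cmp_versions trace above (lt 1.15 2) splits each version string on '.', '-' and ':' and compares the fields numerically from left to right. A minimal standalone sketch of that logic, assuming plain bash — the function name lt_sketch and its variable names are illustrative, not the script's own, and missing fields are simply defaulted to 0 rather than validated as the real helper does:

#!/usr/bin/env bash
# Sketch of the dotted-version "less than" test walked through in the xtrace above.
lt_sketch() {
  local IFS=.-:            # split on dots, dashes, and colons, as the trace shows
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
    (( a > b )) && return 1                 # first is newer: not less-than
    (( a < b )) && return 0                 # first is older: less-than
  done
  return 1                                  # equal: not less-than
}
lt_sketch 1.15 2 && echo "1.15 < 2"         # prints: 1.15 < 2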
00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.672 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:16.673 
07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.673 07:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:24.816 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:24.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.816 07:09:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:24.816 Found net devices under 0000:31:00.0: cvl_0_0 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:24.816 Found net devices under 0000:31:00.1: cvl_0_1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.816 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:09:25.078 00:09:25.078 --- 10.0.0.2 ping statistics --- 00:09:25.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.078 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:09:25.078 00:09:25.078 --- 10.0.0.1 ping statistics --- 00:09:25.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.078 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1126200 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1126200 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1126200 ']' 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.078 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.078 [2024-11-20 07:09:59.767221] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:09:25.078 [2024-11-20 07:09:59.767290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.340 [2024-11-20 07:09:59.858525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.340 [2024-11-20 07:09:59.901508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.340 [2024-11-20 07:09:59.901546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.340 [2024-11-20 07:09:59.901554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.340 [2024-11-20 07:09:59.901561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.340 [2024-11-20 07:09:59.901567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.340 [2024-11-20 07:09:59.903436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.340 [2024-11-20 07:09:59.903555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.340 [2024-11-20 07:09:59.903710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.340 [2024-11-20 07:09:59.903711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.911 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:25.911 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:25.911 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.912 [2024-11-20 07:10:00.615276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.912 Malloc0 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.912 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.241 [2024-11-20 07:10:00.682129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:26.241 test case1: single bdev can't be used in multiple subsystems 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.241 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.241 [2024-11-20 07:10:00.718060] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:26.241 [2024-11-20 07:10:00.718079] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:26.241 [2024-11-20 07:10:00.718086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.241 request: 00:09:26.241 { 00:09:26.241 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:26.241 "namespace": { 00:09:26.242 "bdev_name": "Malloc0", 00:09:26.242 "no_auto_visible": false 
00:09:26.242 }, 00:09:26.242 "method": "nvmf_subsystem_add_ns", 00:09:26.242 "req_id": 1 00:09:26.242 } 00:09:26.242 Got JSON-RPC error response 00:09:26.242 response: 00:09:26.242 { 00:09:26.242 "code": -32602, 00:09:26.242 "message": "Invalid parameters" 00:09:26.242 } 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:26.242 Adding namespace failed - expected result. 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:26.242 test case2: host connect to nvmf target in multiple paths 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.242 [2024-11-20 07:10:00.730203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.242 07:10:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.718 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:29.104 07:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.104 07:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:29.104 07:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.104 07:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:29.104 07:10:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:31.650 07:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.650 [global] 00:09:31.650 thread=1 00:09:31.650 invalidate=1 00:09:31.650 rw=write 00:09:31.650 time_based=1 00:09:31.650 runtime=1 00:09:31.650 ioengine=libaio 00:09:31.650 direct=1 00:09:31.650 bs=4096 00:09:31.650 iodepth=1 00:09:31.650 norandommap=0 00:09:31.650 numjobs=1 00:09:31.650 00:09:31.650 verify_dump=1 00:09:31.650 verify_backlog=512 00:09:31.650 verify_state_save=0 00:09:31.650 do_verify=1 00:09:31.650 verify=crc32c-intel 00:09:31.650 [job0] 00:09:31.650 filename=/dev/nvme0n1 00:09:31.650 Could not set queue depth (nvme0n1) 00:09:31.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.650 fio-3.35 00:09:31.650 Starting 1 thread 00:09:32.596 00:09:32.596 job0: (groupid=0, jobs=1): err= 0: pid=1127717: Wed Nov 20 07:10:07 2024 00:09:32.596 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:32.596 slat (nsec): min=7773, max=62176, avg=27191.35, stdev=4244.85 00:09:32.596 clat (usec): min=759, max=1232, avg=984.34, stdev=67.50 00:09:32.596 lat (usec): min=786, max=1259, avg=1011.53, stdev=68.14 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[ 799], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 947], 00:09:32.596 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:09:32.596 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:09:32.596 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:32.596 | 99.99th=[ 1237] 00:09:32.596 write: IOPS=754, BW=3017KiB/s (3089kB/s)(3020KiB/1001msec); 0 zone resets 00:09:32.596 slat (nsec): min=9142, max=70971, avg=29769.43, stdev=10462.14 00:09:32.596 clat (usec): min=244, max=1302, avg=596.94, stdev=99.11 00:09:32.596 lat (usec): min=281, max=1336, avg=626.71, stdev=104.68 00:09:32.596 clat percentiles (usec): 00:09:32.596 | 1.00th=[ 359], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 515], 00:09:32.596 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 627], 00:09:32.596 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 742], 00:09:32.596 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 1303], 99.95th=[ 1303], 00:09:32.596 | 99.99th=[ 1303] 00:09:32.596 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:32.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:32.596 lat (usec) : 250=0.08%, 500=9.87%, 750=47.99%, 1000=25.41% 00:09:32.596 lat (msec) : 2=16.65% 00:09:32.596 cpu : usr=3.00%, sys=4.50%, ctx=1267, majf=0, minf=1 00:09:32.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.596 issued rwts: total=512,755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.596 00:09:32.596 Run status group 0 (all jobs): 00:09:32.596 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:32.596 WRITE: bw=3017KiB/s (3089kB/s), 3017KiB/s-3017KiB/s (3089kB/s-3089kB/s), io=3020KiB (3092kB), run=1001-1001msec 00:09:32.596 00:09:32.596 Disk stats (read/write): 00:09:32.596 nvme0n1: ios=562/595, merge=0/0, ticks=550/293, in_queue=843, util=94.09% 00:09:32.596 07:10:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.857 rmmod nvme_tcp 00:09:32.857 rmmod nvme_fabrics 00:09:32.857 rmmod nvme_keyring 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1126200 ']' 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1126200 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1126200 ']' 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1126200 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.857 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1126200 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1126200' 00:09:33.118 killing process with pid 1126200 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1126200 00:09:33.118 07:10:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1126200 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.118 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.665 00:09:35.665 real 0m18.879s 00:09:35.665 user 0m49.738s 00:09:35.665 sys 0m7.287s 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.665 ************************************ 00:09:35.665 END TEST nvmf_nmic 00:09:35.665 ************************************ 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.665 ************************************ 00:09:35.665 START TEST nvmf_fio_target 00:09:35.665 ************************************ 00:09:35.665 07:10:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.665 * Looking for test storage... 
00:09:35.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.665 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.666 --rc genhtml_branch_coverage=1 00:09:35.666 --rc genhtml_function_coverage=1 00:09:35.666 --rc genhtml_legend=1 00:09:35.666 --rc geninfo_all_blocks=1 00:09:35.666 --rc geninfo_unexecuted_blocks=1 00:09:35.666 00:09:35.666 ' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.666 --rc genhtml_branch_coverage=1 00:09:35.666 --rc genhtml_function_coverage=1 00:09:35.666 --rc genhtml_legend=1 00:09:35.666 --rc geninfo_all_blocks=1 00:09:35.666 --rc geninfo_unexecuted_blocks=1 00:09:35.666 00:09:35.666 ' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.666 --rc genhtml_branch_coverage=1 00:09:35.666 --rc genhtml_function_coverage=1 00:09:35.666 --rc genhtml_legend=1 00:09:35.666 --rc geninfo_all_blocks=1 00:09:35.666 --rc geninfo_unexecuted_blocks=1 00:09:35.666 00:09:35.666 ' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.666 --rc genhtml_branch_coverage=1 00:09:35.666 --rc genhtml_function_coverage=1 00:09:35.666 --rc genhtml_legend=1 00:09:35.666 --rc geninfo_all_blocks=1 00:09:35.666 --rc geninfo_unexecuted_blocks=1 00:09:35.666 00:09:35.666 ' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.666 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.667 07:10:10 
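The "[: : integer expression expected" message captured above is nvmf/common.sh line 33 executing '[' '' -eq 1 ']': an empty string fed to an arithmetic test operator. A sketch of the failure mode and two common fixes, using a stand-in variable name rather than the one the script actually tests:

flag=""
[ "$flag" -eq 1 ]               # reproduces: [: : integer expression expected
[ "${flag:-0}" -eq 1 ]          # fix 1: default empty/unset to 0
[[ -n $flag && $flag -eq 1 ]]   # fix 2: only compare when non-empty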
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.667 07:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.816 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.817 07:10:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:43.817 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:43.817 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.817 07:10:18 
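The loop traced here is the NIC discovery: build lists of supported vendor:device IDs (the two 0x8086:0x159b hits above are the E810 ports), then resolve each matching PCI function to its kernel netdev through sysfs. A condensed sketch of the same idea, simpler than the real pci_bus_cache machinery:

intel=0x8086
supported=("$intel:0x1592" "$intel:0x159b")      # E810 IDs, as matched above
for pci in /sys/bus/pci/devices/*; do
    vd="$(<"$pci/vendor"):$(<"$pci/device")"
    for id in "${supported[@]}"; do
        [[ $vd == "$id" ]] || continue
        for net in "$pci"/net/*; do              # netdev bound to this function
            [[ -e $net ]] && echo "Found ${pci##*/} -> ${net##*/}"
        done
    done
done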
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:43.817 Found net devices under 0000:31:00.0: cvl_0_0 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:43.817 Found net devices under 0000:31:00.1: cvl_0_1 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.817 07:10:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:09:43.817 00:09:43.817 --- 10.0.0.2 ping statistics --- 00:09:43.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.817 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
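Condensed from the nvmf_tcp_init commands traced above: the target-side port moves into its own network namespace and the two ports get the 10.0.0.0/24 pair, so initiator traffic really crosses the link between the two E810 interfaces instead of looping back:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target check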
00:09:43.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:43.817 00:09:43.817 --- 10.0.0.1 ping statistics --- 00:09:43.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.817 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.817 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1132754 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1132754 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1132754 ']' 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.818 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.079 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.079 [2024-11-20 07:10:18.636005] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
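nvmfappstart launches nvmf_tgt inside the target namespace, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, giving up after the max_retries=100 logged above. A minimal stand-in for that wait (the retry cadence is assumed; rpc_get_methods is a standard SPDK RPC):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmfpid=1132754
for ((i = 0; i < 100; i++)); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" 2> /dev/null || { echo "target exited early"; exit 1; }
    sleep 0.1
done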
00:09:44.079 [2024-11-20 07:10:18.636059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.079 [2024-11-20 07:10:18.723346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.079 [2024-11-20 07:10:18.762011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.079 [2024-11-20 07:10:18.762049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.079 [2024-11-20 07:10:18.762057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.079 [2024-11-20 07:10:18.762064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.079 [2024-11-20 07:10:18.762069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.079 [2024-11-20 07:10:18.763666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.079 [2024-11-20 07:10:18.763779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.079 [2024-11-20 07:10:18.763920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.079 [2024-11-20 07:10:18.763921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.023 [2024-11-20 07:10:19.631768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.023 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.284 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.284 07:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.546 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:45.546 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.546 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:45.546 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.807 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:45.807 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.070 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.070 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.070 07:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.331 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:46.331 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.592 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:46.592 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:46.854 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.854 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:46.854 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.115 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.115 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.376 07:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.376 [2024-11-20 07:10:22.106450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.376 07:10:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:47.637 07:10:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:47.899 07:10:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.815 07:10:24 
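Pulled together, the provisioning sequence traced across fio.sh lines 19-46 above: one TCP transport, plain malloc bdevs plus a raid0 and a concat bdev, all four exposed as namespaces of cnode1, a listener on the namespaced target IP, and finally the kernel initiator connecting in. A condensed replay (the add_ns loop stands in for the four separate calls in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                       # repeated for Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"    # -> /dev/nvme0n1..n4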
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:49.815 07:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:49.815 07:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.815 07:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:49.815 07:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:49.815 07:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:51.727 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:51.727 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:51.727 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.727 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:51.727 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.728 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:51.728 07:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.728 [global] 00:09:51.728 thread=1 00:09:51.728 invalidate=1 00:09:51.728 rw=write 00:09:51.728 time_based=1 00:09:51.728 runtime=1 00:09:51.728 ioengine=libaio 00:09:51.728 direct=1 00:09:51.728 bs=4096 00:09:51.728 iodepth=1 00:09:51.728 norandommap=0 00:09:51.728 numjobs=1 00:09:51.728 00:09:51.728 verify_dump=1 00:09:51.728 verify_backlog=512 00:09:51.728 verify_state_save=0 00:09:51.728 do_verify=1 00:09:51.728 verify=crc32c-intel 00:09:51.728 [job0] 00:09:51.728 filename=/dev/nvme0n1 00:09:51.728 [job1] 00:09:51.728 filename=/dev/nvme0n2 00:09:51.728 [job2] 00:09:51.728 filename=/dev/nvme0n3 00:09:51.728 [job3] 00:09:51.728 filename=/dev/nvme0n4 00:09:51.728 Could not set queue depth (nvme0n1) 00:09:51.728 Could not set queue depth (nvme0n2) 00:09:51.728 Could not set queue depth (nvme0n3) 00:09:51.728 Could not set queue depth (nvme0n4) 00:09:51.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.988 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.988 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.988 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.988 fio-3.35 00:09:51.988 Starting 4 threads 00:09:53.406 00:09:53.406 job0: (groupid=0, jobs=1): err= 0: pid=1134671: Wed Nov 20 07:10:27 2024 00:09:53.406 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:53.406 slat (nsec): min=6793, max=57424, avg=25403.30, stdev=3853.45 00:09:53.406 clat (usec): min=559, max=1280, avg=959.87, stdev=94.21 00:09:53.406 lat (usec): min=585, max=1306, avg=985.27, stdev=94.41 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 889], 
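waitforserial SPDKISFASTANDAWESOME 4, traced above, simply polls lsblk until all four namespaces surface with the subsystem's serial, sleeping 2s between attempts and giving up after 16 tries. A minimal equivalent:

want=4
for ((i = 0; i <= 15; i++)); do
    have=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( have == want )) && break
    sleep 2
done
(( have == want )) || { echo "only $have of $want namespaces appeared"; exit 1; }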
00:09:53.406 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:09:53.406 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1106], 00:09:53.406 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:09:53.406 | 99.99th=[ 1287] 00:09:53.406 write: IOPS=742, BW=2969KiB/s (3040kB/s)(2972KiB/1001msec); 0 zone resets 00:09:53.406 slat (nsec): min=9607, max=84764, avg=31009.44, stdev=7798.91 00:09:53.406 clat (usec): min=266, max=947, avg=622.88, stdev=107.47 00:09:53.406 lat (usec): min=285, max=980, avg=653.89, stdev=110.83 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[ 326], 5.00th=[ 420], 10.00th=[ 482], 20.00th=[ 545], 00:09:53.406 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 652], 00:09:53.406 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:09:53.406 | 99.00th=[ 840], 99.50th=[ 881], 99.90th=[ 947], 99.95th=[ 947], 00:09:53.406 | 99.99th=[ 947] 00:09:53.406 bw ( KiB/s): min= 4096, max= 4096, per=43.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:53.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:53.406 lat (usec) : 500=7.41%, 750=46.45%, 1000=32.67% 00:09:53.406 lat (msec) : 2=13.47% 00:09:53.406 cpu : usr=2.40%, sys=3.30%, ctx=1256, majf=0, minf=1 00:09:53.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 issued rwts: total=512,743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.406 job1: (groupid=0, jobs=1): err= 0: pid=1134675: Wed Nov 20 07:10:27 2024 00:09:53.406 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:53.406 slat (nsec): min=7242, max=61437, avg=26777.40, stdev=3447.18 00:09:53.406 clat (usec): min=742, max=1347, avg=1041.56, stdev=115.14 00:09:53.406 lat (usec): min=768, max=1373, avg=1068.34, stdev=115.05 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[ 807], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 955], 00:09:53.406 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:09:53.406 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1254], 00:09:53.406 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1352], 99.95th=[ 1352], 00:09:53.406 | 99.99th=[ 1352] 00:09:53.406 write: IOPS=647, BW=2589KiB/s (2652kB/s)(2592KiB/1001msec); 0 zone resets 00:09:53.406 slat (nsec): min=9437, max=52543, avg=31927.62, stdev=7530.12 00:09:53.406 clat (usec): min=296, max=1007, avg=652.93, stdev=123.39 00:09:53.406 lat (usec): min=321, max=1041, avg=684.86, stdev=125.22 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[ 371], 5.00th=[ 424], 10.00th=[ 490], 20.00th=[ 553], 00:09:53.406 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:09:53.406 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 824], 95.00th=[ 865], 00:09:53.406 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 1004], 99.95th=[ 1004], 00:09:53.406 | 99.99th=[ 1004] 00:09:53.406 bw ( KiB/s): min= 4096, max= 4096, per=43.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:53.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:53.406 lat (usec) : 500=6.12%, 750=38.62%, 1000=28.88% 00:09:53.406 lat (msec) : 2=26.38% 00:09:53.406 cpu : usr=2.40%, sys=4.70%, ctx=1160, majf=0, minf=1 00:09:53.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:53.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 issued rwts: total=512,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.406 job2: (groupid=0, jobs=1): err= 0: pid=1134678: Wed Nov 20 07:10:27 2024 00:09:53.406 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1026msec) 00:09:53.406 slat (nsec): min=26540, max=27939, avg=26878.53, stdev=301.95 00:09:53.406 clat (usec): min=40848, max=41999, avg=41204.60, stdev=437.29 00:09:53.406 lat (usec): min=40875, max=42026, avg=41231.48, stdev=437.30 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:53.406 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:53.406 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:53.406 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:53.406 | 99.99th=[42206] 00:09:53.406 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:53.406 slat (nsec): min=10010, max=52352, avg=28624.21, stdev=10213.55 00:09:53.406 clat (usec): min=234, max=597, avg=438.13, stdev=70.73 00:09:53.406 lat (usec): min=258, max=632, avg=466.76, stdev=75.49 00:09:53.406 clat percentiles (usec): 00:09:53.406 | 1.00th=[ 262], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 367], 00:09:53.406 | 30.00th=[ 416], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 465], 00:09:53.406 | 70.00th=[ 478], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 537], 00:09:53.406 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 594], 99.95th=[ 594], 00:09:53.406 | 99.99th=[ 594] 00:09:53.406 bw ( KiB/s): min= 4096, max= 4096, per=43.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:53.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:53.406 lat (usec) : 250=0.38%, 500=78.34%, 750=17.70% 00:09:53.406 lat (msec) : 50=3.58% 00:09:53.406 cpu : usr=0.88%, sys=1.17%, ctx=531, majf=0, minf=1 00:09:53.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.406 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.406 job3: (groupid=0, jobs=1): err= 0: pid=1134679: Wed Nov 20 07:10:27 2024 00:09:53.406 read: IOPS=51, BW=208KiB/s (213kB/s)(208KiB/1001msec) 00:09:53.406 slat (nsec): min=25708, max=40269, avg=26523.38, stdev=2028.42 00:09:53.406 clat (usec): min=729, max=43003, avg=12794.72, stdev=18742.10 00:09:53.406 lat (usec): min=755, max=43031, avg=12821.24, stdev=18742.14 00:09:53.407 clat percentiles (usec): 00:09:53.407 | 1.00th=[ 734], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 947], 00:09:53.407 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1074], 00:09:53.407 | 70.00th=[ 1352], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:53.407 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:53.407 | 99.99th=[43254] 00:09:53.407 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:53.407 slat (nsec): min=9357, max=52570, avg=30074.94, stdev=9815.31 00:09:53.407 clat (usec): min=167, max=966, avg=614.29, stdev=114.50 00:09:53.407 lat 
(usec): min=179, max=1000, avg=644.36, stdev=118.94 00:09:53.407 clat percentiles (usec): 00:09:53.407 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 457], 20.00th=[ 529], 00:09:53.407 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:09:53.407 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791], 00:09:53.407 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 963], 99.95th=[ 963], 00:09:53.407 | 99.99th=[ 963] 00:09:53.407 bw ( KiB/s): min= 4096, max= 4096, per=43.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:53.407 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:53.407 lat (usec) : 250=0.18%, 500=14.36%, 750=66.84%, 1000=12.94% 00:09:53.407 lat (msec) : 2=3.01%, 50=2.66% 00:09:53.407 cpu : usr=1.30%, sys=1.80%, ctx=564, majf=0, minf=1 00:09:53.407 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.407 issued rwts: total=52,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.407 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.407 00:09:53.407 Run status group 0 (all jobs): 00:09:53.407 READ: bw=4269KiB/s (4371kB/s), 74.1KiB/s-2046KiB/s (75.9kB/s-2095kB/s), io=4380KiB (4485kB), run=1001-1026msec 00:09:53.407 WRITE: bw=9415KiB/s (9641kB/s), 1996KiB/s-2969KiB/s (2044kB/s-3040kB/s), io=9660KiB (9892kB), run=1001-1026msec 00:09:53.407 00:09:53.407 Disk stats (read/write): 00:09:53.407 nvme0n1: ios=479/512, merge=0/0, ticks=476/302, in_queue=778, util=80.16% 00:09:53.407 nvme0n2: ios=394/512, merge=0/0, ticks=831/269, in_queue=1100, util=88.05% 00:09:53.407 nvme0n3: ios=18/512, merge=0/0, ticks=743/213, in_queue=956, util=89.26% 00:09:53.407 nvme0n4: ios=11/512, merge=0/0, ticks=422/251, in_queue=673, util=88.53% 00:09:53.407 07:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:53.407 [global] 00:09:53.407 thread=1 00:09:53.407 invalidate=1 00:09:53.407 rw=randwrite 00:09:53.407 time_based=1 00:09:53.407 runtime=1 00:09:53.407 ioengine=libaio 00:09:53.407 direct=1 00:09:53.407 bs=4096 00:09:53.407 iodepth=1 00:09:53.407 norandommap=0 00:09:53.407 numjobs=1 00:09:53.407 00:09:53.407 verify_dump=1 00:09:53.407 verify_backlog=512 00:09:53.407 verify_state_save=0 00:09:53.407 do_verify=1 00:09:53.407 verify=crc32c-intel 00:09:53.407 [job0] 00:09:53.407 filename=/dev/nvme0n1 00:09:53.407 [job1] 00:09:53.407 filename=/dev/nvme0n2 00:09:53.407 [job2] 00:09:53.407 filename=/dev/nvme0n3 00:09:53.407 [job3] 00:09:53.407 filename=/dev/nvme0n4 00:09:53.407 Could not set queue depth (nvme0n1) 00:09:53.407 Could not set queue depth (nvme0n2) 00:09:53.407 Could not set queue depth (nvme0n3) 00:09:53.407 Could not set queue depth (nvme0n4) 00:09:53.671 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.671 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.671 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.671 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.671 fio-3.35 00:09:53.671 Starting 4 threads 00:09:55.082 00:09:55.082 job0: (groupid=0, 
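The percentile tables in these reports are fio's human-readable form; when runs need to be compared by script, fio can emit the same data as JSON and jq can pull out single figures. An optional approach, not what fio-wrapper does here (key names as in fio 3.x JSON output):

fio --output-format=json job.fio > result.json
jq '.jobs[0].write.iops' result.json
jq '.jobs[0].write.clat_ns.percentile."99.000000"' result.json   # p99, in ns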
jobs=1): err= 0: pid=1135202: Wed Nov 20 07:10:29 2024 00:09:55.082 read: IOPS=456, BW=1826KiB/s (1870kB/s)(1828KiB/1001msec) 00:09:55.082 slat (nsec): min=6643, max=61709, avg=26007.07, stdev=4520.67 00:09:55.082 clat (usec): min=490, max=42609, avg=1565.35, stdev=5061.78 00:09:55.082 lat (usec): min=516, max=42634, avg=1591.35, stdev=5061.68 00:09:55.082 clat percentiles (usec): 00:09:55.082 | 1.00th=[ 562], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 840], 00:09:55.082 | 30.00th=[ 881], 40.00th=[ 930], 50.00th=[ 971], 60.00th=[ 996], 00:09:55.082 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:55.082 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:55.082 | 99.99th=[42730] 00:09:55.082 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:55.082 slat (nsec): min=9564, max=69519, avg=29338.02, stdev=8734.07 00:09:55.082 clat (usec): min=131, max=803, avg=488.36, stdev=142.04 00:09:55.082 lat (usec): min=142, max=838, avg=517.70, stdev=144.02 00:09:55.082 clat percentiles (usec): 00:09:55.082 | 1.00th=[ 161], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 351], 00:09:55.082 | 30.00th=[ 408], 40.00th=[ 445], 50.00th=[ 506], 60.00th=[ 537], 00:09:55.082 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 693], 00:09:55.082 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 807], 99.95th=[ 807], 00:09:55.082 | 99.99th=[ 807] 00:09:55.082 bw ( KiB/s): min= 4096, max= 4096, per=35.81%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.082 lat (usec) : 250=1.55%, 500=24.36%, 750=31.68%, 1000=25.39% 00:09:55.082 lat (msec) : 2=16.31%, 50=0.72% 00:09:55.082 cpu : usr=1.70%, sys=3.00%, ctx=970, majf=0, minf=1 00:09:55.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.082 issued rwts: total=457,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.082 job1: (groupid=0, jobs=1): err= 0: pid=1135203: Wed Nov 20 07:10:29 2024 00:09:55.082 read: IOPS=610, BW=2442KiB/s (2500kB/s)(2444KiB/1001msec) 00:09:55.082 slat (nsec): min=6896, max=61118, avg=25318.79, stdev=5928.33 00:09:55.082 clat (usec): min=207, max=1093, avg=816.59, stdev=114.68 00:09:55.082 lat (usec): min=233, max=1119, avg=841.91, stdev=115.67 00:09:55.082 clat percentiles (usec): 00:09:55.082 | 1.00th=[ 474], 5.00th=[ 627], 10.00th=[ 668], 20.00th=[ 717], 00:09:55.083 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:09:55.083 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 971], 00:09:55.083 | 99.00th=[ 1029], 99.50th=[ 1045], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:55.083 | 99.99th=[ 1090] 00:09:55.083 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:55.083 slat (nsec): min=9229, max=63495, avg=27362.73, stdev=10711.49 00:09:55.083 clat (usec): min=118, max=959, avg=432.07, stdev=126.27 00:09:55.083 lat (usec): min=129, max=992, avg=459.44, stdev=127.48 00:09:55.083 clat percentiles (usec): 00:09:55.083 | 1.00th=[ 200], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 330], 00:09:55.083 | 30.00th=[ 355], 40.00th=[ 383], 50.00th=[ 424], 60.00th=[ 457], 00:09:55.083 | 70.00th=[ 490], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 652], 00:09:55.083 | 99.00th=[ 742], 99.50th=[ 
857], 99.90th=[ 947], 99.95th=[ 963], 00:09:55.083 | 99.99th=[ 963] 00:09:55.083 bw ( KiB/s): min= 4096, max= 4096, per=35.81%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.083 lat (usec) : 250=3.91%, 500=42.20%, 750=26.42%, 1000=26.73% 00:09:55.083 lat (msec) : 2=0.73% 00:09:55.083 cpu : usr=2.70%, sys=4.10%, ctx=1636, majf=0, minf=1 00:09:55.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 issued rwts: total=611,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.083 job2: (groupid=0, jobs=1): err= 0: pid=1135204: Wed Nov 20 07:10:29 2024 00:09:55.083 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:55.083 slat (nsec): min=7151, max=63415, avg=28506.05, stdev=3658.53 00:09:55.083 clat (usec): min=398, max=1164, avg=884.74, stdev=124.67 00:09:55.083 lat (usec): min=427, max=1193, avg=913.25, stdev=124.96 00:09:55.083 clat percentiles (usec): 00:09:55.083 | 1.00th=[ 502], 5.00th=[ 676], 10.00th=[ 725], 20.00th=[ 791], 00:09:55.083 | 30.00th=[ 840], 40.00th=[ 873], 50.00th=[ 898], 60.00th=[ 922], 00:09:55.083 | 70.00th=[ 947], 80.00th=[ 979], 90.00th=[ 1029], 95.00th=[ 1057], 00:09:55.083 | 99.00th=[ 1156], 99.50th=[ 1156], 99.90th=[ 1172], 99.95th=[ 1172], 00:09:55.083 | 99.99th=[ 1172] 00:09:55.083 write: IOPS=813, BW=3253KiB/s (3331kB/s)(3256KiB/1001msec); 0 zone resets 00:09:55.083 slat (nsec): min=9447, max=62334, avg=32027.54, stdev=9810.23 00:09:55.083 clat (usec): min=190, max=988, avg=605.51, stdev=132.66 00:09:55.083 lat (usec): min=201, max=1023, avg=637.53, stdev=136.04 00:09:55.083 clat percentiles (usec): 00:09:55.083 | 1.00th=[ 293], 5.00th=[ 383], 10.00th=[ 424], 20.00th=[ 494], 00:09:55.083 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:09:55.083 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 816], 00:09:55.083 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:09:55.083 | 99.99th=[ 988] 00:09:55.083 bw ( KiB/s): min= 4096, max= 4096, per=35.81%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.083 lat (usec) : 250=0.38%, 500=13.12%, 750=45.48%, 1000=34.46% 00:09:55.083 lat (msec) : 2=6.56% 00:09:55.083 cpu : usr=3.10%, sys=5.10%, ctx=1329, majf=0, minf=1 00:09:55.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 issued rwts: total=512,814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.083 job3: (groupid=0, jobs=1): err= 0: pid=1135205: Wed Nov 20 07:10:29 2024 00:09:55.083 read: IOPS=41, BW=168KiB/s (172kB/s)(168KiB/1001msec) 00:09:55.083 slat (nsec): min=27229, max=73276, avg=28652.12, stdev=7060.33 00:09:55.083 clat (usec): min=1002, max=42436, avg=15750.15, stdev=19796.89 00:09:55.083 lat (usec): min=1029, max=42463, avg=15778.80, stdev=19796.16 00:09:55.083 clat percentiles (usec): 00:09:55.083 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1156], 00:09:55.083 | 30.00th=[ 1172], 40.00th=[ 
1205], 50.00th=[ 1221], 60.00th=[ 1287], 00:09:55.083 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:55.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:55.083 | 99.99th=[42206] 00:09:55.083 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:55.083 slat (nsec): min=9016, max=66790, avg=30109.86, stdev=9804.47 00:09:55.083 clat (usec): min=247, max=1228, avg=614.00, stdev=127.92 00:09:55.083 lat (usec): min=279, max=1247, avg=644.11, stdev=131.47 00:09:55.083 clat percentiles (usec): 00:09:55.083 | 1.00th=[ 302], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 502], 00:09:55.083 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:55.083 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 807], 00:09:55.083 | 99.00th=[ 947], 99.50th=[ 1020], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:55.083 | 99.99th=[ 1221] 00:09:55.083 bw ( KiB/s): min= 4096, max= 4096, per=35.81%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.083 lat (usec) : 250=0.18%, 500=17.87%, 750=63.72%, 1000=9.93% 00:09:55.083 lat (msec) : 2=5.60%, 50=2.71% 00:09:55.083 cpu : usr=0.80%, sys=2.50%, ctx=556, majf=0, minf=1 00:09:55.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.083 issued rwts: total=42,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.083 00:09:55.083 Run status group 0 (all jobs): 00:09:55.083 READ: bw=6482KiB/s (6637kB/s), 168KiB/s-2442KiB/s (172kB/s-2500kB/s), io=6488KiB (6644kB), run=1001-1001msec 00:09:55.083 WRITE: bw=11.2MiB/s (11.7MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.2MiB (11.7MB), run=1001-1001msec 00:09:55.083 00:09:55.083 Disk stats (read/write): 00:09:55.083 nvme0n1: ios=345/512, merge=0/0, ticks=685/238, in_queue=923, util=95.19% 00:09:55.083 nvme0n2: ios=550/843, merge=0/0, ticks=653/353, in_queue=1006, util=97.85% 00:09:55.083 nvme0n3: ios=569/547, merge=0/0, ticks=723/274, in_queue=997, util=96.17% 00:09:55.083 nvme0n4: ios=49/512, merge=0/0, ticks=620/255, in_queue=875, util=100.00% 00:09:55.083 07:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:55.083 [global] 00:09:55.083 thread=1 00:09:55.083 invalidate=1 00:09:55.083 rw=write 00:09:55.083 time_based=1 00:09:55.083 runtime=1 00:09:55.083 ioengine=libaio 00:09:55.083 direct=1 00:09:55.083 bs=4096 00:09:55.083 iodepth=128 00:09:55.083 norandommap=0 00:09:55.083 numjobs=1 00:09:55.083 00:09:55.083 verify_dump=1 00:09:55.083 verify_backlog=512 00:09:55.083 verify_state_save=0 00:09:55.083 do_verify=1 00:09:55.083 verify=crc32c-intel 00:09:55.083 [job0] 00:09:55.083 filename=/dev/nvme0n1 00:09:55.083 [job1] 00:09:55.083 filename=/dev/nvme0n2 00:09:55.083 [job2] 00:09:55.083 filename=/dev/nvme0n3 00:09:55.083 [job3] 00:09:55.083 filename=/dev/nvme0n4 00:09:55.083 Could not set queue depth (nvme0n1) 00:09:55.083 Could not set queue depth (nvme0n2) 00:09:55.083 Could not set queue depth (nvme0n3) 00:09:55.083 Could not set queue depth (nvme0n4) 00:09:55.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:55.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.344 fio-3.35 00:09:55.344 Starting 4 threads 00:09:56.750 00:09:56.750 job0: (groupid=0, jobs=1): err= 0: pid=1135725: Wed Nov 20 07:10:31 2024 00:09:56.750 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:09:56.750 slat (nsec): min=877, max=14603k, avg=68618.83, stdev=384603.20 00:09:56.750 clat (usec): min=5749, max=27264, avg=8703.44, stdev=2251.24 00:09:56.750 lat (usec): min=5750, max=27312, avg=8772.05, stdev=2275.77 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7570], 20.00th=[ 7963], 00:09:56.750 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:09:56.750 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[ 9896], 00:09:56.750 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:09:56.750 | 99.99th=[27395] 00:09:56.750 write: IOPS=7615, BW=29.7MiB/s (31.2MB/s)(29.8MiB/1003msec); 0 zone resets 00:09:56.750 slat (nsec): min=1521, max=10460k, avg=63642.15, stdev=346094.46 00:09:56.750 clat (usec): min=854, max=64051, avg=8486.22, stdev=5952.59 00:09:56.750 lat (usec): min=1242, max=64281, avg=8549.86, stdev=5980.65 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 4555], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 6980], 00:09:56.750 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:56.750 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8717], 95.00th=[ 9241], 00:09:56.750 | 99.00th=[48497], 99.50th=[58459], 99.90th=[63701], 99.95th=[64226], 00:09:56.750 | 99.99th=[64226] 00:09:56.750 bw ( KiB/s): min=28672, max=31408, per=29.64%, avg=30040.00, stdev=1934.64, samples=2 00:09:56.750 iops : min= 7168, max= 7852, avg=7510.00, stdev=483.66, samples=2 00:09:56.750 lat (usec) : 1000=0.01% 00:09:56.750 lat (msec) : 2=0.09%, 4=0.40%, 10=95.41%, 20=2.02%, 50=1.63% 00:09:56.750 lat (msec) : 100=0.44% 00:09:56.750 cpu : usr=2.50%, sys=4.19%, ctx=931, majf=0, minf=2 00:09:56.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:56.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.750 issued rwts: total=7168,7638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.750 job1: (groupid=0, jobs=1): err= 0: pid=1135726: Wed Nov 20 07:10:31 2024 00:09:56.750 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec) 00:09:56.750 slat (nsec): min=947, max=6850.2k, avg=55771.54, stdev=421601.34 00:09:56.750 clat (usec): min=2189, max=17964, avg=7491.84, stdev=1719.58 00:09:56.750 lat (usec): min=2196, max=17972, avg=7547.61, stdev=1747.94 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 3195], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6390], 00:09:56.750 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:09:56.750 | 70.00th=[ 7767], 80.00th=[ 8455], 90.00th=[10159], 95.00th=[11076], 00:09:56.750 | 99.00th=[12387], 99.50th=[12780], 99.90th=[14091], 99.95th=[17957], 00:09:56.750 | 99.99th=[17957] 00:09:56.750 write: IOPS=8825, 
BW=34.5MiB/s (36.1MB/s)(34.7MiB/1006msec); 0 zone resets 00:09:56.750 slat (nsec): min=1615, max=6514.3k, avg=47861.79, stdev=294680.10 00:09:56.750 clat (usec): min=832, max=40490, avg=7010.09, stdev=4145.35 00:09:56.750 lat (usec): min=840, max=40493, avg=7057.95, stdev=4169.21 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 2040], 5.00th=[ 3621], 10.00th=[ 4113], 20.00th=[ 4948], 00:09:56.750 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 6915], 00:09:56.750 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 8979], 95.00th=[12518], 00:09:56.750 | 99.00th=[32900], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:09:56.750 | 99.99th=[40633] 00:09:56.750 bw ( KiB/s): min=32944, max=37064, per=34.54%, avg=35004.00, stdev=2913.28, samples=2 00:09:56.750 iops : min= 8236, max= 9266, avg=8751.00, stdev=728.32, samples=2 00:09:56.750 lat (usec) : 1000=0.06% 00:09:56.750 lat (msec) : 2=0.38%, 4=4.24%, 10=87.03%, 20=7.24%, 50=1.06% 00:09:56.750 cpu : usr=6.27%, sys=9.05%, ctx=791, majf=0, minf=1 00:09:56.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:56.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.750 issued rwts: total=8704,8878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.750 job2: (groupid=0, jobs=1): err= 0: pid=1135728: Wed Nov 20 07:10:31 2024 00:09:56.750 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:09:56.750 slat (nsec): min=957, max=10559k, avg=108336.39, stdev=738717.47 00:09:56.750 clat (usec): min=3496, max=54953, avg=11863.11, stdev=5428.22 00:09:56.750 lat (usec): min=4118, max=54962, avg=11971.45, stdev=5501.61 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 4752], 5.00th=[ 7439], 10.00th=[ 8455], 20.00th=[ 9110], 00:09:56.750 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10945], 00:09:56.750 | 70.00th=[11731], 80.00th=[12911], 90.00th=[17957], 95.00th=[22676], 00:09:56.750 | 99.00th=[31589], 99.50th=[41681], 99.90th=[52691], 99.95th=[52691], 00:09:56.750 | 99.99th=[54789] 00:09:56.750 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:56.750 slat (nsec): min=1672, max=8580.1k, avg=112729.64, stdev=506304.84 00:09:56.750 clat (usec): min=1174, max=60855, avg=16769.67, stdev=9663.01 00:09:56.750 lat (usec): min=1185, max=60861, avg=16882.40, stdev=9718.79 00:09:56.750 clat percentiles (usec): 00:09:56.750 | 1.00th=[ 2409], 5.00th=[ 5276], 10.00th=[ 7111], 20.00th=[ 8094], 00:09:56.750 | 30.00th=[10814], 40.00th=[13566], 50.00th=[15401], 60.00th=[16319], 00:09:56.750 | 70.00th=[20841], 80.00th=[23462], 90.00th=[26870], 95.00th=[32900], 00:09:56.750 | 99.00th=[55313], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080], 00:09:56.750 | 99.99th=[61080] 00:09:56.750 bw ( KiB/s): min=16880, max=19984, per=18.19%, avg=18432.00, stdev=2194.86, samples=2 00:09:56.751 iops : min= 4220, max= 4996, avg=4608.00, stdev=548.71, samples=2 00:09:56.751 lat (msec) : 2=0.37%, 4=0.99%, 10=39.09%, 20=39.08%, 50=19.42% 00:09:56.751 lat (msec) : 100=1.05% 00:09:56.751 cpu : usr=3.19%, sys=4.69%, ctx=516, majf=0, minf=1 00:09:56.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:56.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:56.751 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.751 job3: (groupid=0, jobs=1): err= 0: pid=1135729: Wed Nov 20 07:10:31 2024 00:09:56.751 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:09:56.751 slat (nsec): min=943, max=12347k, avg=106461.77, stdev=737837.41 00:09:56.751 clat (usec): min=1690, max=54477, avg=13812.30, stdev=6772.07 00:09:56.751 lat (usec): min=1709, max=54483, avg=13918.76, stdev=6828.36 00:09:56.751 clat percentiles (usec): 00:09:56.751 | 1.00th=[ 5473], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[10290], 00:09:56.751 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:09:56.751 | 70.00th=[12518], 80.00th=[16188], 90.00th=[24773], 95.00th=[30802], 00:09:56.751 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:56.751 | 99.99th=[54264] 00:09:56.751 write: IOPS=4340, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1006msec); 0 zone resets 00:09:56.751 slat (nsec): min=1641, max=17771k, avg=120352.94, stdev=744674.42 00:09:56.751 clat (usec): min=889, max=79633, avg=16261.62, stdev=11780.41 00:09:56.751 lat (usec): min=919, max=79635, avg=16381.97, stdev=11859.35 00:09:56.751 clat percentiles (usec): 00:09:56.751 | 1.00th=[ 2474], 5.00th=[ 5145], 10.00th=[ 6652], 20.00th=[ 9241], 00:09:56.751 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11863], 60.00th=[15139], 00:09:56.751 | 70.00th=[15664], 80.00th=[21627], 90.00th=[33817], 95.00th=[40109], 00:09:56.751 | 99.00th=[65274], 99.50th=[71828], 99.90th=[79168], 99.95th=[79168], 00:09:56.751 | 99.99th=[79168] 00:09:56.751 bw ( KiB/s): min=13432, max=20480, per=16.73%, avg=16956.00, stdev=4983.69, samples=2 00:09:56.751 iops : min= 3358, max= 5120, avg=4239.00, stdev=1245.92, samples=2 00:09:56.751 lat (usec) : 1000=0.06% 00:09:56.751 lat (msec) : 2=0.26%, 4=1.41%, 10=24.62%, 20=55.41%, 50=17.11% 00:09:56.751 lat (msec) : 100=1.13% 00:09:56.751 cpu : usr=3.78%, sys=4.18%, ctx=433, majf=0, minf=1 00:09:56.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:56.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.751 issued rwts: total=4096,4367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.751 00:09:56.751 Run status group 0 (all jobs): 00:09:56.751 READ: bw=93.9MiB/s (98.5MB/s), 15.9MiB/s-33.8MiB/s (16.7MB/s-35.4MB/s), io=94.5MiB (99.1MB), run=1003-1006msec 00:09:56.751 WRITE: bw=99.0MiB/s (104MB/s), 17.0MiB/s-34.5MiB/s (17.8MB/s-36.1MB/s), io=99.6MiB (104MB), run=1003-1006msec 00:09:56.751 00:09:56.751 Disk stats (read/write): 00:09:56.751 nvme0n1: ios=6121/6144, merge=0/0, ticks=17488/18429, in_queue=35917, util=87.68% 00:09:56.751 nvme0n2: ios=7202/7207, merge=0/0, ticks=50619/48892, in_queue=99511, util=91.43% 00:09:56.751 nvme0n3: ios=3323/3584, merge=0/0, ticks=39344/62968, in_queue=102312, util=88.37% 00:09:56.751 nvme0n4: ios=3584/3871, merge=0/0, ticks=29110/31817, in_queue=60927, util=89.51% 00:09:56.751 07:10:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:56.751 [global] 00:09:56.751 thread=1 00:09:56.751 invalidate=1 00:09:56.751 rw=randwrite 00:09:56.751 time_based=1 00:09:56.751 runtime=1 00:09:56.751 
ioengine=libaio 00:09:56.751 direct=1 00:09:56.751 bs=4096 00:09:56.751 iodepth=128 00:09:56.751 norandommap=0 00:09:56.751 numjobs=1 00:09:56.751 00:09:56.751 verify_dump=1 00:09:56.751 verify_backlog=512 00:09:56.751 verify_state_save=0 00:09:56.751 do_verify=1 00:09:56.751 verify=crc32c-intel 00:09:56.751 [job0] 00:09:56.751 filename=/dev/nvme0n1 00:09:56.751 [job1] 00:09:56.751 filename=/dev/nvme0n2 00:09:56.751 [job2] 00:09:56.751 filename=/dev/nvme0n3 00:09:56.751 [job3] 00:09:56.751 filename=/dev/nvme0n4 00:09:56.751 Could not set queue depth (nvme0n1) 00:09:56.751 Could not set queue depth (nvme0n2) 00:09:56.751 Could not set queue depth (nvme0n3) 00:09:56.751 Could not set queue depth (nvme0n4) 00:09:57.010 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.010 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.010 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.010 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.010 fio-3.35 00:09:57.010 Starting 4 threads 00:09:58.412 00:09:58.412 job0: (groupid=0, jobs=1): err= 0: pid=1136226: Wed Nov 20 07:10:32 2024 00:09:58.412 read: IOPS=5833, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1007msec) 00:09:58.412 slat (nsec): min=900, max=8122.1k, avg=69188.28, stdev=451977.57 00:09:58.412 clat (usec): min=2341, max=34573, avg=8466.37, stdev=3024.77 00:09:58.412 lat (usec): min=4280, max=34575, avg=8535.56, stdev=3061.21 00:09:58.412 clat percentiles (usec): 00:09:58.412 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7046], 00:09:58.412 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 8029], 00:09:58.412 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[12911], 00:09:58.412 | 99.00th=[21627], 99.50th=[26346], 99.90th=[33817], 99.95th=[34341], 00:09:58.412 | 99.99th=[34341] 00:09:58.412 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:09:58.412 slat (nsec): min=1483, max=27615k, avg=92283.22, stdev=633088.83 00:09:58.412 clat (usec): min=1200, max=76368, avg=12701.01, stdev=11418.60 00:09:58.412 lat (usec): min=1210, max=76375, avg=12793.30, stdev=11478.59 00:09:58.412 clat percentiles (usec): 00:09:58.412 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 6521], 20.00th=[ 6980], 00:09:58.412 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8356], 00:09:58.412 | 70.00th=[10552], 80.00th=[15008], 90.00th=[28967], 95.00th=[39584], 00:09:58.413 | 99.00th=[69731], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:09:58.413 | 99.99th=[76022] 00:09:58.413 bw ( KiB/s): min=20480, max=28672, per=24.27%, avg=24576.00, stdev=5792.62, samples=2 00:09:58.413 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:09:58.413 lat (msec) : 2=0.06%, 4=0.97%, 10=76.03%, 20=14.00%, 50=8.21% 00:09:58.413 lat (msec) : 100=0.73% 00:09:58.413 cpu : usr=4.08%, sys=5.37%, ctx=599, majf=0, minf=1 00:09:58.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:58.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.413 issued rwts: total=5874,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.413 job1: 
(groupid=0, jobs=1): err= 0: pid=1136240: Wed Nov 20 07:10:32 2024 00:09:58.413 read: IOPS=6398, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1006msec) 00:09:58.413 slat (nsec): min=886, max=23152k, avg=83697.97, stdev=675563.84 00:09:58.413 clat (usec): min=2215, max=57491, avg=10909.07, stdev=7078.59 00:09:58.413 lat (usec): min=2224, max=57514, avg=10992.77, stdev=7145.11 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 7177], 00:09:58.413 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 9241], 00:09:58.413 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[21627], 95.00th=[27395], 00:09:58.413 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:58.413 | 99.99th=[57410] 00:09:58.413 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:09:58.413 slat (nsec): min=1472, max=9933.4k, avg=53792.20, stdev=338470.25 00:09:58.413 clat (usec): min=666, max=65466, avg=8016.57, stdev=5914.13 00:09:58.413 lat (usec): min=675, max=65474, avg=8070.36, stdev=5930.61 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 1045], 5.00th=[ 2089], 10.00th=[ 4490], 20.00th=[ 5866], 00:09:58.413 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7439], 00:09:58.413 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[13829], 00:09:58.413 | 99.00th=[40109], 99.50th=[56886], 99.90th=[60556], 99.95th=[65274], 00:09:58.413 | 99.99th=[65274] 00:09:58.413 bw ( KiB/s): min=25992, max=31352, per=28.31%, avg=28672.00, stdev=3790.09, samples=2 00:09:58.413 iops : min= 6498, max= 7838, avg=7168.00, stdev=947.52, samples=2 00:09:58.413 lat (usec) : 750=0.07%, 1000=0.35% 00:09:58.413 lat (msec) : 2=2.09%, 4=2.49%, 10=76.65%, 20=11.47%, 50=6.59% 00:09:58.413 lat (msec) : 100=0.29% 00:09:58.413 cpu : usr=4.08%, sys=5.97%, ctx=808, majf=0, minf=3 00:09:58.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:58.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.413 issued rwts: total=6437,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.413 job2: (groupid=0, jobs=1): err= 0: pid=1136253: Wed Nov 20 07:10:32 2024 00:09:58.413 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:09:58.413 slat (nsec): min=921, max=4498.5k, avg=65620.27, stdev=426304.88 00:09:58.413 clat (usec): min=5194, max=13136, avg=8451.95, stdev=1078.82 00:09:58.413 lat (usec): min=5200, max=13163, avg=8517.57, stdev=1131.12 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7963], 00:09:58.413 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586], 00:09:58.413 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[10290], 00:09:58.413 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12780], 99.95th=[13042], 00:09:58.413 | 99.99th=[13173] 00:09:58.413 write: IOPS=7724, BW=30.2MiB/s (31.6MB/s)(30.3MiB/1004msec); 0 zone resets 00:09:58.413 slat (nsec): min=1514, max=4051.6k, avg=59531.24, stdev=299180.84 00:09:58.413 clat (usec): min=836, max=12219, avg=8036.40, stdev=958.15 00:09:58.413 lat (usec): min=3728, max=12252, avg=8095.93, stdev=983.86 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 4621], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7635], 00:09:58.413 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 
8094], 60.00th=[ 8225], 00:09:58.413 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9241], 00:09:58.413 | 99.00th=[11076], 99.50th=[11207], 99.90th=[11863], 99.95th=[12125], 00:09:58.413 | 99.99th=[12256] 00:09:58.413 bw ( KiB/s): min=30312, max=31128, per=30.34%, avg=30720.00, stdev=577.00, samples=2 00:09:58.413 iops : min= 7578, max= 7782, avg=7680.00, stdev=144.25, samples=2 00:09:58.413 lat (usec) : 1000=0.01% 00:09:58.413 lat (msec) : 4=0.16%, 10=94.93%, 20=4.90% 00:09:58.413 cpu : usr=4.49%, sys=7.18%, ctx=831, majf=0, minf=1 00:09:58.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:58.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.413 issued rwts: total=7680,7755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.413 job3: (groupid=0, jobs=1): err= 0: pid=1136254: Wed Nov 20 07:10:32 2024 00:09:58.413 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:09:58.413 slat (nsec): min=956, max=9237.5k, avg=98199.99, stdev=653111.82 00:09:58.413 clat (usec): min=4272, max=36441, avg=11205.63, stdev=4526.50 00:09:58.413 lat (usec): min=4278, max=36449, avg=11303.83, stdev=4577.22 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 8717], 00:09:58.413 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:09:58.413 | 70.00th=[11076], 80.00th=[12649], 90.00th=[16909], 95.00th=[20055], 00:09:58.413 | 99.00th=[31589], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:09:58.413 | 99.99th=[36439] 00:09:58.413 write: IOPS=4457, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1010msec); 0 zone resets 00:09:58.413 slat (nsec): min=1618, max=8100.9k, avg=127881.90, stdev=580899.38 00:09:58.413 clat (usec): min=1121, max=43986, avg=18279.53, stdev=9773.77 00:09:58.413 lat (usec): min=1132, max=43989, avg=18407.41, stdev=9833.26 00:09:58.413 clat percentiles (usec): 00:09:58.413 | 1.00th=[ 3130], 5.00th=[ 4948], 10.00th=[ 6521], 20.00th=[ 7898], 00:09:58.413 | 30.00th=[10814], 40.00th=[14615], 50.00th=[15795], 60.00th=[20579], 00:09:58.413 | 70.00th=[24249], 80.00th=[27657], 90.00th=[31327], 95.00th=[34341], 00:09:58.413 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:09:58.413 | 99.99th=[43779] 00:09:58.413 bw ( KiB/s): min=15424, max=19568, per=17.28%, avg=17496.00, stdev=2930.25, samples=2 00:09:58.413 iops : min= 3856, max= 4892, avg=4374.00, stdev=732.56, samples=2 00:09:58.413 lat (msec) : 2=0.02%, 4=0.95%, 10=39.97%, 20=34.40%, 50=24.65% 00:09:58.413 cpu : usr=3.27%, sys=4.86%, ctx=499, majf=0, minf=2 00:09:58.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:58.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.413 issued rwts: total=4096,4502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.413 00:09:58.413 Run status group 0 (all jobs): 00:09:58.413 READ: bw=93.2MiB/s (97.7MB/s), 15.8MiB/s-29.9MiB/s (16.6MB/s-31.3MB/s), io=94.1MiB (98.7MB), run=1004-1010msec 00:09:58.413 WRITE: bw=98.9MiB/s (104MB/s), 17.4MiB/s-30.2MiB/s (18.3MB/s-31.6MB/s), io=99.9MiB (105MB), run=1004-1010msec 00:09:58.413 00:09:58.413 Disk stats (read/write): 
00:09:58.413 nvme0n1: ios=5170/5495, merge=0/0, ticks=24865/39156, in_queue=64021, util=93.59% 00:09:58.413 nvme0n2: ios=5152/5759, merge=0/0, ticks=29965/28479, in_queue=58444, util=91.12% 00:09:58.413 nvme0n3: ios=6171/6607, merge=0/0, ticks=25313/24320, in_queue=49633, util=92.19% 00:09:58.413 nvme0n4: ios=3584/3719, merge=0/0, ticks=38566/63374, in_queue=101940, util=89.41% 00:09:58.413 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:58.413 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1136301 00:09:58.413 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:58.413 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:58.413 [global] 00:09:58.413 thread=1 00:09:58.413 invalidate=1 00:09:58.413 rw=read 00:09:58.413 time_based=1 00:09:58.413 runtime=10 00:09:58.413 ioengine=libaio 00:09:58.413 direct=1 00:09:58.413 bs=4096 00:09:58.413 iodepth=1 00:09:58.413 norandommap=1 00:09:58.413 numjobs=1 00:09:58.413 00:09:58.413 [job0] 00:09:58.413 filename=/dev/nvme0n1 00:09:58.413 [job1] 00:09:58.413 filename=/dev/nvme0n2 00:09:58.413 [job2] 00:09:58.413 filename=/dev/nvme0n3 00:09:58.413 [job3] 00:09:58.413 filename=/dev/nvme0n4 00:09:58.413 Could not set queue depth (nvme0n1) 00:09:58.413 Could not set queue depth (nvme0n2) 00:09:58.413 Could not set queue depth (nvme0n3) 00:09:58.413 Could not set queue depth (nvme0n4) 00:09:58.676 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.676 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.676 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.676 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.676 fio-3.35 00:09:58.676 Starting 4 threads 00:10:01.221 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:01.482 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11046912, buflen=4096 00:10:01.482 fio: pid=1136749, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.482 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:01.482 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=999424, buflen=4096 00:10:01.482 fio: pid=1136735, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.482 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.482 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:01.742 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6737920, buflen=4096 00:10:01.742 fio: pid=1136666, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:01.742 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.742 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:02.003 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.003 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:02.003 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=307200, buflen=4096 00:10:02.003 fio: pid=1136700, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.003 00:10:02.003 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136666: Wed Nov 20 07:10:36 2024 00:10:02.003 read: IOPS=568, BW=2272KiB/s (2327kB/s)(6580KiB/2896msec) 00:10:02.003 slat (usec): min=6, max=297, avg=23.64, stdev=10.20 00:10:02.003 clat (usec): min=283, max=42004, avg=1724.77, stdev=6107.94 00:10:02.003 lat (usec): min=309, max=42030, avg=1748.41, stdev=6109.43 00:10:02.003 clat percentiles (usec): 00:10:02.003 | 1.00th=[ 490], 5.00th=[ 570], 10.00th=[ 644], 20.00th=[ 725], 00:10:02.003 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 824], 00:10:02.003 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 938], 00:10:02.003 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:02.003 | 99.99th=[42206] 00:10:02.003 bw ( KiB/s): min= 96, max= 4912, per=43.67%, avg=2603.20, stdev=2411.97, samples=5 00:10:02.003 iops : min= 24, max= 1228, avg=650.80, stdev=602.99, samples=5 00:10:02.003 lat (usec) : 500=1.34%, 750=23.88%, 1000=72.17% 00:10:02.003 lat (msec) : 2=0.24%, 50=2.31% 00:10:02.003 cpu : usr=0.55%, sys=1.55%, ctx=1647, majf=0, minf=1 00:10:02.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.003 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.003 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.003 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136700: Wed Nov 20 07:10:36 2024 00:10:02.003 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(300KiB/3128msec) 00:10:02.003 slat (usec): min=9, max=14555, avg=398.46, stdev=2267.82 00:10:02.003 clat (usec): min=951, max=43062, avg=40998.53, stdev=6667.26 00:10:02.003 lat (usec): min=992, max=55991, avg=41401.96, stdev=7092.48 00:10:02.003 clat percentiles (usec): 00:10:02.003 | 1.00th=[ 955], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:02.003 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:02.003 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:10:02.003 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:02.003 | 99.99th=[43254] 00:10:02.003 bw ( KiB/s): min= 92, max= 104, per=1.61%, avg=96.67, stdev= 3.93, samples=6 00:10:02.003 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:10:02.003 lat (usec) : 1000=1.32% 00:10:02.003 lat (msec) : 2=1.32%, 50=96.05% 00:10:02.003 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=2 00:10:02.003 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.003 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.003 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.003 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136735: Wed Nov 20 07:10:36 2024 00:10:02.003 read: IOPS=90, BW=359KiB/s (367kB/s)(976KiB/2722msec) 00:10:02.003 slat (usec): min=3, max=17182, avg=145.43, stdev=1496.06 00:10:02.003 clat (usec): min=484, max=43164, avg=10920.11, stdev=17746.85 00:10:02.003 lat (usec): min=488, max=43190, avg=11066.11, stdev=17742.68 00:10:02.003 clat percentiles (usec): 00:10:02.003 | 1.00th=[ 562], 5.00th=[ 627], 10.00th=[ 668], 20.00th=[ 742], 00:10:02.003 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 832], 00:10:02.003 | 70.00th=[ 865], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:02.003 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:02.003 | 99.99th=[43254] 00:10:02.003 bw ( KiB/s): min= 96, max= 96, per=1.61%, avg=96.00, stdev= 0.00, samples=5 00:10:02.003 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:02.003 lat (usec) : 500=0.41%, 750=23.27%, 1000=50.20% 00:10:02.003 lat (msec) : 2=0.41%, 10=0.82%, 50=24.49% 00:10:02.003 cpu : usr=0.00%, sys=0.15%, ctx=249, majf=0, minf=2 00:10:02.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.004 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.004 issued rwts: total=245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.004 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1136749: Wed Nov 20 07:10:36 2024 00:10:02.004 read: IOPS=1063, BW=4251KiB/s (4353kB/s)(10.5MiB/2538msec) 00:10:02.004 slat (nsec): min=6362, max=64195, avg=25754.04, stdev=6835.21 00:10:02.004 clat (usec): min=237, max=42944, avg=900.98, stdev=2501.15 00:10:02.004 lat (usec): min=264, max=42972, avg=926.73, stdev=2501.20 00:10:02.004 clat percentiles (usec): 00:10:02.004 | 1.00th=[ 478], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 635], 00:10:02.004 | 30.00th=[ 668], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 799], 00:10:02.004 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 955], 00:10:02.004 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[42206], 99.95th=[42206], 00:10:02.004 | 99.99th=[42730] 00:10:02.004 bw ( KiB/s): min= 2296, max= 5328, per=71.94%, avg=4288.00, stdev=1247.09, samples=5 00:10:02.004 iops : min= 574, max= 1332, avg=1072.00, stdev=311.77, samples=5 00:10:02.004 lat (usec) : 250=0.04%, 500=1.56%, 750=47.70%, 1000=47.44% 00:10:02.004 lat (msec) : 2=2.85%, 50=0.37% 00:10:02.004 cpu : usr=1.93%, sys=3.90%, ctx=2698, majf=0, minf=2 00:10:02.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.004 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.004 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.004 00:10:02.004 
Run status group 0 (all jobs): 00:10:02.004 READ: bw=5960KiB/s (6103kB/s), 95.9KiB/s-4251KiB/s (98.2kB/s-4353kB/s), io=18.2MiB (19.1MB), run=2538-3128msec 00:10:02.004 00:10:02.004 Disk stats (read/write): 00:10:02.004 nvme0n1: ios=1634/0, merge=0/0, ticks=2662/0, in_queue=2662, util=92.55% 00:10:02.004 nvme0n2: ios=73/0, merge=0/0, ticks=2994/0, in_queue=2994, util=93.74% 00:10:02.004 nvme0n3: ios=60/0, merge=0/0, ticks=2481/0, in_queue=2481, util=95.46% 00:10:02.004 nvme0n4: ios=2697/0, merge=0/0, ticks=2185/0, in_queue=2185, util=96.37% 00:10:02.004 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.004 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:02.341 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.342 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:02.621 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.621 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:02.621 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.621 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1136301 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:02.882 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.883 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:02.883 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:02.883 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as 
expected' 00:10:02.883 nvmf hotplug test: fio failed as expected 00:10:02.883 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.144 rmmod nvme_tcp 00:10:03.144 rmmod nvme_fabrics 00:10:03.144 rmmod nvme_keyring 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1132754 ']' 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1132754 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1132754 ']' 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1132754 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.144 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1132754 00:10:03.405 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.405 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.405 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1132754' 00:10:03.405 killing process with pid 1132754 00:10:03.405 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1132754 00:10:03.405 07:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1132754 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.405 07:10:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.405 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.951 00:10:05.951 real 0m30.205s 00:10:05.951 user 2m30.416s 00:10:05.951 sys 0m10.090s 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 ************************************ 00:10:05.951 END TEST nvmf_fio_target 00:10:05.951 ************************************ 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 ************************************ 00:10:05.951 START TEST nvmf_bdevio 00:10:05.951 ************************************ 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:05.951 * Looking for test storage... 
00:10:05.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.951 --rc genhtml_branch_coverage=1 00:10:05.951 --rc genhtml_function_coverage=1 00:10:05.951 --rc genhtml_legend=1 00:10:05.951 --rc geninfo_all_blocks=1 00:10:05.951 --rc geninfo_unexecuted_blocks=1 00:10:05.951 00:10:05.951 ' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.951 --rc genhtml_branch_coverage=1 00:10:05.951 --rc genhtml_function_coverage=1 00:10:05.951 --rc genhtml_legend=1 00:10:05.951 --rc geninfo_all_blocks=1 00:10:05.951 --rc geninfo_unexecuted_blocks=1 00:10:05.951 00:10:05.951 ' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.951 --rc genhtml_branch_coverage=1 00:10:05.951 --rc genhtml_function_coverage=1 00:10:05.951 --rc genhtml_legend=1 00:10:05.951 --rc geninfo_all_blocks=1 00:10:05.951 --rc geninfo_unexecuted_blocks=1 00:10:05.951 00:10:05.951 ' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.951 --rc genhtml_branch_coverage=1 00:10:05.951 --rc genhtml_function_coverage=1 00:10:05.951 --rc genhtml_legend=1 00:10:05.951 --rc geninfo_all_blocks=1 00:10:05.951 --rc geninfo_unexecuted_blocks=1 00:10:05.951 00:10:05.951 ' 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:05.951 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.952 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:14.094 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:14.094 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.094 07:10:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.094 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:14.095 Found net devices under 0000:31:00.0: cvl_0_0 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:14.095 Found net devices under 0000:31:00.1: cvl_0_1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.095 
07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:10:14.095 00:10:14.095 --- 10.0.0.2 ping statistics --- 00:10:14.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.095 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:14.095 00:10:14.095 --- 10.0.0.1 ping statistics --- 00:10:14.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.095 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1142330 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1142330 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1142330 ']' 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.095 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:14.356 [2024-11-20 07:10:48.909915] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
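The nvmf_tcp_init sequence traced above builds the test topology: the first e810 port (cvl_0_0) becomes the target side inside a fresh network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, one iptables rule opens the NVMe/TCP port between them, and the two pings confirm reachability before nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt line in the trace). A condensed sketch of those steps, using the interface names, addresses and port from this run; the real ipts helper tags its rule with an SPDK_NVMF comment so teardown can find it, and the comment text here is illustrative:

```bash
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator side, tagged for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF: bdevio test rule'

ping -c 1 10.0.0.2                           # root ns -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1       # and back the other way
```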
00:10:14.356 [2024-11-20 07:10:48.909982] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.356 [2024-11-20 07:10:49.018957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.356 [2024-11-20 07:10:49.069323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.356 [2024-11-20 07:10:49.069379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.356 [2024-11-20 07:10:49.069387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.356 [2024-11-20 07:10:49.069395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.356 [2024-11-20 07:10:49.069401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.357 [2024-11-20 07:10:49.071441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:14.357 [2024-11-20 07:10:49.071600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:14.357 [2024-11-20 07:10:49.071760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.357 [2024-11-20 07:10:49.071760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 [2024-11-20 07:10:49.788090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 Malloc0 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.297 07:10:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.297 [2024-11-20 07:10:49.866946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.297 { 00:10:15.297 "params": { 00:10:15.297 "name": "Nvme$subsystem", 00:10:15.297 "trtype": "$TEST_TRANSPORT", 00:10:15.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.297 "adrfam": "ipv4", 00:10:15.297 "trsvcid": "$NVMF_PORT", 00:10:15.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.297 "hdgst": ${hdgst:-false}, 00:10:15.297 "ddgst": ${ddgst:-false} 00:10:15.297 }, 00:10:15.297 "method": "bdev_nvme_attach_controller" 00:10:15.297 } 00:10:15.297 EOF 00:10:15.297 )") 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:15.297 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.297 "params": { 00:10:15.297 "name": "Nvme1", 00:10:15.297 "trtype": "tcp", 00:10:15.297 "traddr": "10.0.0.2", 00:10:15.297 "adrfam": "ipv4", 00:10:15.297 "trsvcid": "4420", 00:10:15.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.297 "hdgst": false, 00:10:15.297 "ddgst": false 00:10:15.297 }, 00:10:15.297 "method": "bdev_nvme_attach_controller" 00:10:15.297 }' 00:10:15.297 [2024-11-20 07:10:49.935420] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
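Between app startup and the bdevio launch, the script assembles the target through rpc_cmd, a thin wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock unix socket (which is why it works from the root namespace even though nvmf_tgt runs inside cvl_0_0_ns_spdk). The same five calls issued directly, with arguments copied from the trace; the flag comments are my reading of the options, not log output:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
"$RPC" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```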
00:10:15.297 [2024-11-20 07:10:49.935488] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142529 ] 00:10:15.297 [2024-11-20 07:10:50.021595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.558 [2024-11-20 07:10:50.068527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.558 [2024-11-20 07:10:50.068638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.558 [2024-11-20 07:10:50.068635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.818 I/O targets: 00:10:15.818 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.818 00:10:15.818 00:10:15.818 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.818 http://cunit.sourceforge.net/ 00:10:15.818 00:10:15.818 00:10:15.818 Suite: bdevio tests on: Nvme1n1 00:10:15.818 Test: blockdev write read block ...passed 00:10:15.818 Test: blockdev write zeroes read block ...passed 00:10:15.818 Test: blockdev write zeroes read no split ...passed 00:10:15.818 Test: blockdev write zeroes read split ...passed 00:10:15.818 Test: blockdev write zeroes read split partial ...passed 00:10:15.818 Test: blockdev reset ...[2024-11-20 07:10:50.550248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:15.819 [2024-11-20 07:10:50.550321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b14b0 (9): Bad file descriptor 00:10:16.080 [2024-11-20 07:10:50.647472] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
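bdevio itself never touches the RPC socket; it gets its single NVMe-oF bdev from the JSON that gen_nvmf_target_json prints, handed over by process substitution (the --json /dev/fd/62 in the trace). Rebuilt as a standalone invocation, with the params exactly as printed above and the outer subsystems/bdev wrapper filled in from SPDK's standard JSON-config shape (an assumption, since the trace only shows the inner object):

```bash
BDEVIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio

"$BDEVIO" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
```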
00:10:16.080 passed 00:10:16.080 Test: blockdev write read 8 blocks ...passed 00:10:16.080 Test: blockdev write read size > 128k ...passed 00:10:16.080 Test: blockdev write read invalid size ...passed 00:10:16.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.080 Test: blockdev write read max offset ...passed 00:10:16.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.080 Test: blockdev writev readv 8 blocks ...passed 00:10:16.341 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.341 Test: blockdev writev readv block ...passed 00:10:16.341 Test: blockdev writev readv size > 128k ...passed 00:10:16.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.341 Test: blockdev comparev and writev ...[2024-11-20 07:10:50.912419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.912448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.912460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.912466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.912889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.912898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.912908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.912914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.913369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.913378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.913388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.913393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.913828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.341 [2024-11-20 07:10:50.913837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.341 [2024-11-20 07:10:50.913843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:16.341 passed 00:10:16.341 Test: blockdev nvme passthru rw ...passed 00:10:16.341 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:10:50.998800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.341 [2024-11-20 07:10:50.998811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:16.342 [2024-11-20 07:10:50.999194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.342 [2024-11-20 07:10:50.999202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:16.342 [2024-11-20 07:10:50.999563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.342 [2024-11-20 07:10:50.999571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:16.342 [2024-11-20 07:10:50.999933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.342 [2024-11-20 07:10:50.999940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:16.342 passed 00:10:16.342 Test: blockdev nvme admin passthru ...passed 00:10:16.342 Test: blockdev copy ...passed 00:10:16.342 00:10:16.342 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.342 suites 1 1 n/a 0 0 00:10:16.342 tests 23 23 23 0 0 00:10:16.342 asserts 152 152 152 0 n/a 00:10:16.342 00:10:16.342 Elapsed time = 1.435 seconds 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.614 rmmod nvme_tcp 00:10:16.614 rmmod nvme_fabrics 00:10:16.614 rmmod nvme_keyring 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
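The nvmftestfini teardown that follows mirrors the setup in reverse: kill the target, unload the kernel modules, strip only the firewall rules this run added, and dissolve the namespace. A condensed sketch, assuming the PID and namespace from this run; ip netns delete stands in for the _remove_spdk_ns helper, whose body the log does not show:

```bash
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # stop the nvmf_tgt reactors

modprobe -v -r nvme-tcp                          # unloads nvme_tcp, nvme_fabrics, nvme_keyring

# Remove only the rules this run added, recognizable by their SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk                  # hands cvl_0_0 back to the root namespace
ip -4 addr flush cvl_0_1
```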
00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1142330 ']' 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1142330 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1142330 ']' 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1142330 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1142330 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1142330' 00:10:16.614 killing process with pid 1142330 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1142330 00:10:16.614 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1142330 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.876 07:10:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.791 00:10:18.791 real 0m13.275s 00:10:18.791 user 0m14.388s 00:10:18.791 sys 0m6.921s 00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 ************************************ 00:10:18.791 END TEST nvmf_bdevio 00:10:18.791 ************************************ 00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:18.791 00:10:18.791 real 5m15.045s 00:10:18.791 user 11m47.594s 00:10:18.791 sys 1m59.434s 
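Both timing blocks above (real 0m13.275s for the bdevio suite, real 5m15.045s for all of nvmf_target_core) come from the run_test wrapper, which also prints the START TEST/END TEST banners and is why every xtrace line carries the nested test names as a prefix (nvmf_tcp.nvmf_target_core.nvmf_bdevio). A hypothetical minimal reimplementation of that pattern, not SPDK's actual function:

```bash
run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                      # the test script; its real/user/sys lands in the log
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}

run_test nvmf_target_extra \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
```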
00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.791 07:10:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 ************************************ 00:10:18.791 END TEST nvmf_target_core 00:10:18.791 ************************************ 00:10:19.053 07:10:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.053 07:10:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:19.053 07:10:53 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.053 07:10:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.053 ************************************ 00:10:19.053 START TEST nvmf_target_extra 00:10:19.053 ************************************ 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.053 * Looking for test storage... 00:10:19.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:19.053 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.314 --rc genhtml_branch_coverage=1 00:10:19.314 --rc genhtml_function_coverage=1 00:10:19.314 --rc genhtml_legend=1 00:10:19.314 --rc geninfo_all_blocks=1 00:10:19.314 --rc geninfo_unexecuted_blocks=1 00:10:19.314 00:10:19.314 ' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.314 --rc genhtml_branch_coverage=1 00:10:19.314 --rc genhtml_function_coverage=1 00:10:19.314 --rc genhtml_legend=1 00:10:19.314 --rc geninfo_all_blocks=1 00:10:19.314 --rc geninfo_unexecuted_blocks=1 00:10:19.314 00:10:19.314 ' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.314 --rc genhtml_branch_coverage=1 00:10:19.314 --rc genhtml_function_coverage=1 00:10:19.314 --rc genhtml_legend=1 00:10:19.314 --rc geninfo_all_blocks=1 00:10:19.314 --rc geninfo_unexecuted_blocks=1 00:10:19.314 00:10:19.314 ' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.314 --rc genhtml_branch_coverage=1 00:10:19.314 --rc genhtml_function_coverage=1 00:10:19.314 --rc genhtml_legend=1 00:10:19.314 --rc geninfo_all_blocks=1 00:10:19.314 --rc geninfo_unexecuted_blocks=1 00:10:19.314 00:10:19.314 ' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
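The lt 1.15 2 / cmp_versions exchange traced above (and repeated by every test file that sources common.sh) is how the suite decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field, with the shorter one padded with zeros. A simplified sketch of the numeric path:

```bash
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less at this field
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater, stop
  done
  return 1   # equal -> not less-than
}

lt 1.15 2 && echo "pre-2.x lcov"
```

Here ver1=(1 15) loses to ver2=(2) on the first field, so lt returns 0 and the 1.x-era --rc lcov_branch_coverage/--rc lcov_function_coverage options seen in the trace are selected.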
00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.314 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.315 ************************************ 00:10:19.315 START TEST nvmf_example 00:10:19.315 ************************************ 00:10:19.315 07:10:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.315 * Looking for test storage... 
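One line of noise recurs after every sourcing of common.sh: "line 33: [: : integer expression expected", produced by the traced '[' '' -eq 1 ']'. That is the classic failure of a numeric test against an unset or empty variable. A minimal reproduction with a hypothetical variable name, plus the guarded form that would silence it:

```bash
unset SPDK_HUGE_FLAG                               # hypothetical stand-in for the empty value
[ "$SPDK_HUGE_FLAG" -eq 1 ] && echo enabled        # bash: [: : integer expression expected

[ "${SPDK_HUGE_FLAG:-0}" -eq 1 ] && echo enabled   # empty defaults to 0, no error
```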
00:10:19.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.315 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.315 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.315 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.576 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.576 --rc genhtml_branch_coverage=1 00:10:19.576 --rc genhtml_function_coverage=1 00:10:19.577 --rc genhtml_legend=1 00:10:19.577 --rc geninfo_all_blocks=1 00:10:19.577 --rc geninfo_unexecuted_blocks=1 00:10:19.577 00:10:19.577 ' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.577 --rc genhtml_branch_coverage=1 00:10:19.577 --rc genhtml_function_coverage=1 00:10:19.577 --rc genhtml_legend=1 00:10:19.577 --rc geninfo_all_blocks=1 00:10:19.577 --rc geninfo_unexecuted_blocks=1 00:10:19.577 00:10:19.577 ' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.577 --rc genhtml_branch_coverage=1 00:10:19.577 --rc genhtml_function_coverage=1 00:10:19.577 --rc genhtml_legend=1 00:10:19.577 --rc geninfo_all_blocks=1 00:10:19.577 --rc geninfo_unexecuted_blocks=1 00:10:19.577 00:10:19.577 ' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.577 --rc genhtml_branch_coverage=1 00:10:19.577 --rc genhtml_function_coverage=1 00:10:19.577 --rc genhtml_legend=1 00:10:19.577 --rc geninfo_all_blocks=1 00:10:19.577 --rc geninfo_unexecuted_blocks=1 00:10:19.577 00:10:19.577 ' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:19.577 07:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:19.577 07:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.577 07:10:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:27.726 07:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:27.726 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:27.726 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:27.726 Found net devices under 0000:31:00.0: cvl_0_0 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:27.726 Found net devices under 0000:31:00.1: cvl_0_1 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.726 07:11:02 
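
The device scan above is plain sysfs: every netdev bound to a PCI function shows up as a directory under that function's net/ subtree. A condensed sketch of the lookup, using one of the BDFs from this run:

#!/usr/bin/env bash
# Sketch of the sysfs lookup behind 'Found net devices under ...' above.
# The BDF is taken from this log; substitute your own device.
pci=0000:31:00.0

shopt -s nullglob                        # no match -> empty array, not a literal '*'
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")  # keep interface names only, as common.sh@427 does

if ((${#pci_net_devs[@]})); then
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "no net devices under $pci" >&2
fi
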
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.726 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:27.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms
00:10:27.727
00:10:27.727 --- 10.0.0.2 ping statistics ---
00:10:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.727 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:27.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:27.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms
00:10:27.727
00:10:27.727 --- 10.0.0.1 ping statistics ---
00:10:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.727 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1147621
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1147621
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1147621 ']'
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:27.727 07:11:02
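
Stripped of the xtrace framing, the nvmf_tcp_init plumbing above is a handful of iproute2 calls: isolate the target-side port in a network namespace, address both ends, open TCP/4420, and ping both ways. A condensed root-only sketch with the interface names and addresses from this run (the real helper also tags its iptables rule with an SPDK_NVMF comment so the later iptr cleanup can find it):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init sequence traced above; requires root.
set -e
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let the initiator reach the NVMe/TCP listener port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Both directions must answer before the target app is started.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
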
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.727 07:11:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.668 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:28.930 07:11:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:41.169 Initializing NVMe Controllers 00:10:41.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.169 Initialization complete. Launching workers. 00:10:41.169 ======================================================== 00:10:41.169 Latency(us) 00:10:41.169 Device Information : IOPS MiB/s Average min max 00:10:41.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18082.59 70.64 3539.83 873.06 15725.52 00:10:41.169 ======================================================== 00:10:41.169 Total : 18082.59 70.64 3539.83 873.06 15725.52 00:10:41.169 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.169 rmmod nvme_tcp 00:10:41.169 rmmod nvme_fabrics 00:10:41.169 rmmod nvme_keyring 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1147621 ']' 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1147621 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1147621 ']' 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1147621 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1147621 00:10:41.169 07:11:13 
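
The rpc_cmd calls above are thin wrappers over scripts/rpc.py talking to /var/tmp/spdk.sock; written out, the whole target configuration plus the measurement is half a dozen commands. Everything below is lifted from the run above (which lands at roughly 18 k IOPS / 70.6 MiB/s with ~3.5 ms mean latency); paths assume the working directory is an SPDK checkout and the target app is already listening:

#!/usr/bin/env bash
# The rpc_cmd sequence above, spelled out against scripts/rpc.py.
set -e
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192            # transport options copied from the run
$RPC bdev_malloc_create 64 512                          # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB random I/O, 30% reads, queue depth 64 - the same perf job as above.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
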
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1147621' 00:10:41.169 killing process with pid 1147621 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1147621 00:10:41.169 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1147621 00:10:41.170 nvmf threads initialize successfully 00:10:41.170 bdev subsystem init successfully 00:10:41.170 created a nvmf target service 00:10:41.170 create targets's poll groups done 00:10:41.170 all subsystems of target started 00:10:41.170 nvmf target is running 00:10:41.170 all subsystems of target stopped 00:10:41.170 destroy targets's poll groups done 00:10:41.170 destroyed the nvmf target service 00:10:41.170 bdev subsystem finish successfully 00:10:41.170 nvmf threads destroy successfully 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.170 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 00:10:41.431 real 0m22.252s 00:10:41.431 user 0m47.382s 00:10:41.431 sys 0m7.400s 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.431 ************************************ 00:10:41.431 END TEST nvmf_example 00:10:41.431 ************************************ 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
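
Each suite in this log is framed by the same START/END banners and a real/user/sys timing block, all produced by the run_test helper in autotest_common.sh. A toy equivalent showing just that visible shape (the real helper also manages xtrace and per-test bookkeeping):

#!/usr/bin/env bash
# Toy model of the run_test wrapper whose banners appear above.
run_test() {
    local name=$1; shift
    printf '************ START TEST %s ************\n' "$name"
    local rc=0
    time "$@" || rc=$?                  # 'time' emits the real/user/sys block seen above
    printf '************ END TEST %s ************\n' "$name"
    return "$rc"
}

run_test demo_sleep sleep 1             # prints the banners around a trivial command
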
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.431 07:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.693 ************************************ 00:10:41.693 START TEST nvmf_filesystem 00:10:41.693 ************************************ 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:41.694 * Looking for test storage... 00:10:41.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.694 --rc genhtml_branch_coverage=1 00:10:41.694 --rc genhtml_function_coverage=1 00:10:41.694 --rc genhtml_legend=1 00:10:41.694 --rc geninfo_all_blocks=1 00:10:41.694 --rc geninfo_unexecuted_blocks=1 00:10:41.694 00:10:41.694 ' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.694 --rc genhtml_branch_coverage=1 00:10:41.694 --rc genhtml_function_coverage=1 00:10:41.694 --rc genhtml_legend=1 00:10:41.694 --rc geninfo_all_blocks=1 00:10:41.694 --rc geninfo_unexecuted_blocks=1 00:10:41.694 00:10:41.694 ' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.694 --rc genhtml_branch_coverage=1 00:10:41.694 --rc genhtml_function_coverage=1 00:10:41.694 --rc genhtml_legend=1 00:10:41.694 --rc geninfo_all_blocks=1 00:10:41.694 --rc geninfo_unexecuted_blocks=1 00:10:41.694 00:10:41.694 ' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.694 --rc genhtml_branch_coverage=1 00:10:41.694 --rc genhtml_function_coverage=1 00:10:41.694 --rc genhtml_legend=1 00:10:41.694 --rc geninfo_all_blocks=1 00:10:41.694 --rc geninfo_unexecuted_blocks=1 00:10:41.694 00:10:41.694 ' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:41.694 07:11:16 
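
The lt/cmp_versions trace above (used here to decide whether the installed lcov predates 2.x) splits both version strings on '.', '-' and ':' and compares them field by field as integers. A standalone sketch of the '<' case:

#!/usr/bin/env bash
# Field-wise version comparison, modelled on the scripts/common.sh trace above.
version_lt() {
    local -a ver1 ver2
    local i a b max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < max; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0      # non-numeric fields compare as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( 10#$a < 10#$b )) && return 0  # 10# keeps leading zeros decimal
        (( 10#$a > 10#$b )) && return 1
    done
    return 1                             # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"     # the exact comparison performed above
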
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.694 
07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:41.694 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.695 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
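
applications.sh, sourced above, anchors every binary path on a repo root derived from its own location, and keeps each app as a one-element array so callers can append flags the same way NVMF_EXAMPLE is built earlier in this log. A sketch of that bootstrap (the ../.. depth is illustrative, not the real file's):

#!/usr/bin/env bash
# Sketch of the applications.sh path bootstrap traced above.
_this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
_root=$(readlink -f "$_this_dir/../..")    # depth is illustrative for this sketch

NVMF_APP=("$_root/build/bin/nvmf_tgt")     # arrays, so 'NVMF_APP+=(-m 0xF)' composes cleanly
SPDK_APP=("$_root/build/bin/spdk_tgt")

echo "root=$_root"
echo "nvmf_tgt at ${NVMF_APP[0]}"
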
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:41.960 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.961 #define SPDK_CONFIG_H 00:10:41.961 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:41.961 #define SPDK_CONFIG_APPS 1 00:10:41.961 #define SPDK_CONFIG_ARCH native 00:10:41.961 #undef SPDK_CONFIG_ASAN 00:10:41.961 #undef SPDK_CONFIG_AVAHI 00:10:41.961 #undef SPDK_CONFIG_CET 00:10:41.961 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:41.961 #define SPDK_CONFIG_COVERAGE 1 00:10:41.961 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.961 #undef SPDK_CONFIG_CRYPTO 00:10:41.961 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.961 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.961 #undef SPDK_CONFIG_DAOS 00:10:41.961 #define SPDK_CONFIG_DAOS_DIR 00:10:41.961 #define SPDK_CONFIG_DEBUG 1 00:10:41.961 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.961 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.961 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.961 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.961 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.961 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.961 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.961 #define SPDK_CONFIG_EXAMPLES 1 00:10:41.961 #undef SPDK_CONFIG_FC 00:10:41.961 #define SPDK_CONFIG_FC_PATH 00:10:41.961 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.961 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.961 #define SPDK_CONFIG_FSDEV 1 00:10:41.961 #undef SPDK_CONFIG_FUSE 00:10:41.961 #undef SPDK_CONFIG_FUZZER 00:10:41.961 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.961 #undef SPDK_CONFIG_GOLANG 00:10:41.961 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.961 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.961 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.961 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.961 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.961 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.961 #undef SPDK_CONFIG_HAVE_LZ4 00:10:41.961 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:41.961 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:41.961 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.961 #define SPDK_CONFIG_IDXD 1 00:10:41.961 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.961 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.961 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.961 #define SPDK_CONFIG_ISAL 1 00:10:41.961 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.961 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.961 #define SPDK_CONFIG_LIBDIR 00:10:41.961 #undef SPDK_CONFIG_LTO 00:10:41.961 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.961 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:41.961 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.961 #undef SPDK_CONFIG_OCF 00:10:41.961 #define SPDK_CONFIG_OCF_PATH 00:10:41.961 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.961 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.961 #define SPDK_CONFIG_PGO_DIR 00:10:41.961 #undef SPDK_CONFIG_PGO_USE 00:10:41.961 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.961 #undef SPDK_CONFIG_RAID5F 00:10:41.961 #undef SPDK_CONFIG_RBD 00:10:41.961 #define SPDK_CONFIG_RDMA 1 00:10:41.961 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.961 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.961 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.961 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.961 #define SPDK_CONFIG_SHARED 1 00:10:41.961 #undef SPDK_CONFIG_SMA 00:10:41.961 #define SPDK_CONFIG_TESTS 1 00:10:41.961 #undef SPDK_CONFIG_TSAN 
00:10:41.961 #define SPDK_CONFIG_UBLK 1 00:10:41.961 #define SPDK_CONFIG_UBSAN 1 00:10:41.961 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.961 #undef SPDK_CONFIG_URING 00:10:41.961 #define SPDK_CONFIG_URING_PATH 00:10:41.961 #undef SPDK_CONFIG_URING_ZNS 00:10:41.961 #undef SPDK_CONFIG_USDT 00:10:41.961 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.961 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:41.961 #define SPDK_CONFIG_VFIO_USER 1 00:10:41.961 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.961 #define SPDK_CONFIG_VHOST 1 00:10:41.961 #define SPDK_CONFIG_VIRTIO 1 00:10:41.961 #undef SPDK_CONFIG_VTUNE 00:10:41.961 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.961 #define SPDK_CONFIG_WERROR 1 00:10:41.961 #define SPDK_CONFIG_WPDK_DIR 00:10:41.961 #undef SPDK_CONFIG_XNVME 00:10:41.961 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
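
The applications.sh@23 test above decides debug vs release by glob-matching the generated config header; the heavily escaped pattern in the trace is just this, in isolation (quoted pattern instead of backslash escapes, header path from the log):

#!/usr/bin/env bash
# The SPDK_CONFIG_DEBUG probe from applications.sh@23, in isolation.
config=include/spdk/config.h

if [[ $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build"
else
    echo "release build"
fi
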
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:41.961 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:41.961 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
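
The pm/common bookkeeping above reduces to an associative array marking which collectors need sudo, plus conditional appends (bare metal also gets the temperature and BMC collectors). A condensed sketch; the bare-metal flag below stands in for the QEMU/dockerenv probes in the trace:

#!/usr/bin/env bash
# Condensed model of the power-monitor selection traced above.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

on_bare_metal=true                       # stands in for the QEMU/dockerenv checks
if $on_bare_metal; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi

for m in "${MONITOR_RESOURCES[@]}"; do
    echo "$m (needs sudo: ${MONITOR_RESOURCES_SUDO[$m]})"
done
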
00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:41.962 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:41.962 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
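Note that the exported `LD_LIBRARY_PATH` and `PYTHONPATH` above carry the same directory set repeated five times over — one copy per re-application of the environment setup as nested scripts are sourced. Harmless here, but it illustrates why path exports are often made idempotent. A hypothetical guard (not SPDK's actual code, purely an illustration):

```bash
prepend_path_once() { # usage: prepend_path_once VARNAME DIR
  local var=$1 dir=$2
  case ":${!var}:" in
    *":${dir}:"*) ;;  # already present: leave the value alone
    *) printf -v "$var" '%s' "${dir}${!var:+:${!var}}" ;;
  esac
  export "$var"
}

# Sourcing this any number of times yields one copy of each entry:
prepend_path_once LD_LIBRARY_PATH "$SPDK_LIB_DIR"
prepend_path_once LD_LIBRARY_PATH "$DPDK_LIB_DIR"
prepend_path_once LD_LIBRARY_PATH "$VFIO_LIB_DIR"
```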
00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
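The @197–@244 entries above configure the sanitizers for the run: UBSAN is made fatal and verbose, and a known libfuse3 leak is silenced through an LSAN suppression file. A hedged reconstruction of those steps — the paths, the suppression entry, and the option strings are copied from the trace; the surrounding script text is illustrative:

```bash
# Make sanitizer failures fatal and debuggable:
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"

# Silence a known libfuse3 leak via an LSAN suppression file:
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS="suppressions=$asan_suppression_file"

# Default RPC socket used by SPDK applications and scripts:
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
```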
00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1150422 ]] 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1150422 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
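`set_test_storage 2147483648` requests 2 GiB of scratch space (padded to 2214592512 bytes with a 64 MiB margin), then walks a list of candidate directories and keeps the first one whose filesystem can hold it, as the `df -T` parsing traced on the following lines shows. A condensed sketch of that selection, assuming the framework's `testdir` variable is already set; the helper name and the simplified `df` invocation are illustrative, and the real function additionally tracks per-mount size/usage tables:

```bash
set_test_storage_sketch() { # usage: set_test_storage_sketch <bytes>
  local requested_size=$1 target_dir avail storage_fallback
  storage_fallback=$(mktemp -udt spdk.XXXXXX)  # -u: name only, nothing created
  local candidates=(
    "$testdir"
    "$storage_fallback/tests/${testdir##*/}"
    "$storage_fallback"
  )

  for target_dir in "${candidates[@]}"; do
    mkdir -p "$target_dir" || continue
    # GNU df: available bytes on the filesystem backing target_dir
    avail=$(df --output=avail -B1 "$target_dir" | tail -n1)
    if ((avail >= requested_size)); then
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      return 0
    fi
  done
  return 1
}

# 2 GiB requested, padded with a 64 MiB margin as in the trace:
set_test_storage_sketch $((2147483648 + 64 * 1024 * 1024))
```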
00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:41.963 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.X8hGu6 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.X8hGu6/tests/target /tmp/spdk.X8hGu6 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:41.964 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122336174080 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356550144 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7020376064 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666906624 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847697408 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23613440 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.964 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677675008 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=602112 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:41.964 * Looking for test storage... 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122336174080 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9234968576 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.964 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.965 --rc genhtml_branch_coverage=1 00:10:41.965 --rc genhtml_function_coverage=1 00:10:41.965 --rc genhtml_legend=1 00:10:41.965 --rc geninfo_all_blocks=1 00:10:41.965 --rc geninfo_unexecuted_blocks=1 00:10:41.965 00:10:41.965 ' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.965 --rc genhtml_branch_coverage=1 00:10:41.965 --rc genhtml_function_coverage=1 00:10:41.965 --rc genhtml_legend=1 00:10:41.965 --rc geninfo_all_blocks=1 00:10:41.965 --rc geninfo_unexecuted_blocks=1 00:10:41.965 00:10:41.965 ' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.965 --rc genhtml_branch_coverage=1 00:10:41.965 --rc genhtml_function_coverage=1 00:10:41.965 --rc genhtml_legend=1 00:10:41.965 --rc geninfo_all_blocks=1 00:10:41.965 --rc geninfo_unexecuted_blocks=1 00:10:41.965 00:10:41.965 ' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.965 --rc genhtml_branch_coverage=1 00:10:41.965 --rc genhtml_function_coverage=1 00:10:41.965 --rc genhtml_legend=1 00:10:41.965 --rc geninfo_all_blocks=1 00:10:41.965 --rc geninfo_unexecuted_blocks=1 00:10:41.965 00:10:41.965 ' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.965 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.966 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.966 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:50.111 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.111 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:50.112 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.112 07:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:50.112 Found net devices under 0000:31:00.0: cvl_0_0 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:50.112 Found net devices under 0000:31:00.1: cvl_0_1 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.112 07:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.112 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.372 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:10:50.632 00:10:50.632 --- 10.0.0.2 ping statistics --- 00:10:50.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.632 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:10:50.632 00:10:50.632 --- 10.0.0.1 ping statistics --- 00:10:50.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.632 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.632 ************************************ 00:10:50.632 START TEST nvmf_filesystem_no_in_capsule 00:10:50.632 ************************************ 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1154753 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1154753 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1154753 ']' 00:10:50.632 
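At this point nvmf_tcp_init has finished building the test topology and nvmfappstart has launched the target inside it: the target port lives in a private network namespace, the initiator port stays in the host namespace, and iptables explicitly admits NVMe/TCP traffic on port 4420. Condensed from the commands traced above (interface names and addresses are the ones from this run):

  ip netns add cvl_0_0_ns_spdk                                # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # prove reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Only then does run_test hand control to nvmf_filesystem_part, first with in_capsule=0.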
07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.632 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.632 [2024-11-20 07:11:25.315061] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:10:50.632 [2024-11-20 07:11:25.315152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.892 [2024-11-20 07:11:25.413299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.892 [2024-11-20 07:11:25.456095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.892 [2024-11-20 07:11:25.456134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.892 [2024-11-20 07:11:25.456142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.892 [2024-11-20 07:11:25.456149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.892 [2024-11-20 07:11:25.456155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
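A side note on the NOTICE block above: the target was started with -e 0xFFFF, so every tracepoint group is enabled, and the banner tells you how to pull a trace while the run is live. Either of the following works, taken directly from the printed hint (-i 0 matches the instance ID the target was started with):

  spdk_trace -s nvmf -i 0            # attach to the live shared-memory trace
  cp /dev/shm/nvmf_trace.0 /tmp/     # or grab the file for offline analysis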
00:10:50.892 [2024-11-20 07:11:25.457809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.892 [2024-11-20 07:11:25.457956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.892 [2024-11-20 07:11:25.458042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.892 [2024-11-20 07:11:25.458042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.462 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:51.462 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:51.462 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.462 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.463 [2024-11-20 07:11:26.170684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.463 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.723 Malloc1 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.723 07:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.723 [2024-11-20 07:11:26.309930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.723 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:51.724 { 00:10:51.724 "name": "Malloc1", 00:10:51.724 "aliases": [ 00:10:51.724 "150a0747-a349-4c9e-a214-69c72b338dbe" 00:10:51.724 ], 00:10:51.724 "product_name": "Malloc disk", 00:10:51.724 "block_size": 512, 00:10:51.724 "num_blocks": 1048576, 00:10:51.724 "uuid": "150a0747-a349-4c9e-a214-69c72b338dbe", 00:10:51.724 "assigned_rate_limits": { 00:10:51.724 "rw_ios_per_sec": 0, 00:10:51.724 "rw_mbytes_per_sec": 0, 00:10:51.724 "r_mbytes_per_sec": 0, 00:10:51.724 "w_mbytes_per_sec": 0 00:10:51.724 }, 00:10:51.724 "claimed": true, 00:10:51.724 "claim_type": "exclusive_write", 00:10:51.724 "zoned": false, 00:10:51.724 "supported_io_types": { 00:10:51.724 "read": 
true, 00:10:51.724 "write": true, 00:10:51.724 "unmap": true, 00:10:51.724 "flush": true, 00:10:51.724 "reset": true, 00:10:51.724 "nvme_admin": false, 00:10:51.724 "nvme_io": false, 00:10:51.724 "nvme_io_md": false, 00:10:51.724 "write_zeroes": true, 00:10:51.724 "zcopy": true, 00:10:51.724 "get_zone_info": false, 00:10:51.724 "zone_management": false, 00:10:51.724 "zone_append": false, 00:10:51.724 "compare": false, 00:10:51.724 "compare_and_write": false, 00:10:51.724 "abort": true, 00:10:51.724 "seek_hole": false, 00:10:51.724 "seek_data": false, 00:10:51.724 "copy": true, 00:10:51.724 "nvme_iov_md": false 00:10:51.724 }, 00:10:51.724 "memory_domains": [ 00:10:51.724 { 00:10:51.724 "dma_device_id": "system", 00:10:51.724 "dma_device_type": 1 00:10:51.724 }, 00:10:51.724 { 00:10:51.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.724 "dma_device_type": 2 00:10:51.724 } 00:10:51.724 ], 00:10:51.724 "driver_specific": {} 00:10:51.724 } 00:10:51.724 ]' 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:51.724 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.639 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.639 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:53.639 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.639 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:53.639 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
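With the target up, the test provisions it over JSON-RPC and then attaches from the initiator side. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence above is roughly equivalent to (arguments exactly as logged; -c 0 means no in-capsule data for this first pass):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, host namespace (--hostnqn/--hostid omitted here; the run passes them as logged):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The bdev_get_bdevs dump above is get_bdev_size cross-checking the 512 MiB malloc size (1048576 blocks of 512 bytes), and waitforserial then polls lsblk until a block device with serial SPDKISFASTANDAWESOME appears, nvme0n1 in this run.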
SPDKISFASTANDAWESOME 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:55.554 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:56.498 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.442 ************************************ 00:10:57.442 START TEST filesystem_ext4 00:10:57.442 ************************************ 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:57.442 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:57.442 mke2fs 1.47.0 (5-Feb-2023) 00:10:57.442 Discarding device blocks: 0/522240 done 00:10:57.442 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:57.442 Filesystem UUID: faac7b2f-df13-468d-b12b-5b3b7b5e3af9 00:10:57.442 Superblock backups stored on blocks: 00:10:57.442 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:57.442 00:10:57.442 Allocating group tables: 0/64 done 00:10:57.442 Writing inode tables: 0/64 done 00:10:57.704 Creating journal (8192 blocks): done 00:11:00.006 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:00.006 00:11:00.006 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:00.006 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.595 
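The ext4 subtest body that just ran is the generic nvmf_filesystem_create helper exercised once per fstype: format the partition carved out earlier, mount it, do a minimal write/delete, unmount, and assert the target survived the I/O. As a sketch (device paths and the pid are the ones from this run; btrfs and xfs pass -f instead of ext4's -F):

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # done once, before the subtests
  partprobe && sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1154753                           # target (nvmfpid) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and the partition must still be visible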
07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1154753 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.595 00:11:06.595 real 0m8.378s 00:11:06.595 user 0m0.027s 00:11:06.595 sys 0m0.083s 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:06.595 ************************************ 00:11:06.595 END TEST filesystem_ext4 00:11:06.595 ************************************ 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.595 ************************************ 00:11:06.595 START TEST filesystem_btrfs 00:11:06.595 ************************************ 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:06.595 07:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:06.595 btrfs-progs v6.8.1 00:11:06.595 See https://btrfs.readthedocs.io for more information. 00:11:06.595 00:11:06.595 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:06.595 NOTE: several default settings have changed in version 5.15, please make sure 00:11:06.595 this does not affect your deployments: 00:11:06.595 - DUP for metadata (-m dup) 00:11:06.595 - enabled no-holes (-O no-holes) 00:11:06.595 - enabled free-space-tree (-R free-space-tree) 00:11:06.595 00:11:06.595 Label: (null) 00:11:06.595 UUID: 5abf321d-090a-4e2c-ac4c-23db772df2da 00:11:06.595 Node size: 16384 00:11:06.595 Sector size: 4096 (CPU page size: 4096) 00:11:06.595 Filesystem size: 510.00MiB 00:11:06.595 Block group profiles: 00:11:06.595 Data: single 8.00MiB 00:11:06.595 Metadata: DUP 32.00MiB 00:11:06.595 System: DUP 8.00MiB 00:11:06.595 SSD detected: yes 00:11:06.595 Zoned device: no 00:11:06.595 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:06.595 Checksum: crc32c 00:11:06.595 Number of devices: 1 00:11:06.595 Devices: 00:11:06.595 ID SIZE PATH 00:11:06.595 1 510.00MiB /dev/nvme0n1p1 00:11:06.595 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:06.595 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1154753 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.595 
07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.595 00:11:06.595 real 0m0.732s 00:11:06.595 user 0m0.026s 00:11:06.595 sys 0m0.127s 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.595 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.595 ************************************ 00:11:06.595 END TEST filesystem_btrfs 00:11:06.595 ************************************ 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.596 ************************************ 00:11:06.596 START TEST filesystem_xfs 00:11:06.596 ************************************ 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:06.596 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:06.596 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:06.596 = sectsz=512 attr=2, projid32bit=1 00:11:06.596 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:06.596 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:06.596 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:06.596 = sunit=0 swidth=0 blks 00:11:06.596 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:06.596 log =internal log bsize=4096 blocks=16384, version=2 00:11:06.596 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:06.596 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:07.983 Discarding blocks...Done. 00:11:07.983 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:07.983 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1154753 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.530 00:11:10.530 real 0m3.704s 00:11:10.530 user 0m0.035s 00:11:10.530 sys 0m0.075s 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 ************************************ 00:11:10.530 END TEST filesystem_xfs 00:11:10.530 ************************************ 00:11:10.530 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.530 07:11:45 
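After the xfs pass the no_in_capsule group tears everything down in roughly the reverse order it was built: drop the test partition (the flock serializes access to the device node while parted rewrites the table), detach the initiator, delete the subsystem over RPC, and stop the target. Condensed from the trace around this point:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1154753 && wait 1154753       # killprocess: signal reactor_0, then reap it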
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1154753 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1154753 ']' 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1154753 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1154753 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1154753' 00:11:10.530 killing process with pid 1154753 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1154753 00:11:10.530 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 1154753 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:10.791 00:11:10.791 real 0m20.190s 00:11:10.791 user 1m19.749s 00:11:10.791 sys 0m1.497s 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.791 ************************************ 00:11:10.791 END TEST nvmf_filesystem_no_in_capsule 00:11:10.791 ************************************ 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.791 ************************************ 00:11:10.791 START TEST nvmf_filesystem_in_capsule 00:11:10.791 ************************************ 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.791 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1159025 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1159025 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1159025 ']' 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.792 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.052 [2024-11-20 07:11:45.579842] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:11:11.052 [2024-11-20 07:11:45.579902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.052 [2024-11-20 07:11:45.665548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.052 [2024-11-20 07:11:45.702493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.052 [2024-11-20 07:11:45.702523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.052 [2024-11-20 07:11:45.702531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.052 [2024-11-20 07:11:45.702538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.052 [2024-11-20 07:11:45.702544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.052 [2024-11-20 07:11:45.704228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.052 [2024-11-20 07:11:45.704346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.052 [2024-11-20 07:11:45.704502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.052 [2024-11-20 07:11:45.704502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.623 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.623 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:11.623 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.623 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.623 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 [2024-11-20 07:11:46.411273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.885 07:11:46 
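The second test group repeats the identical filesystem matrix with in-capsule data enabled: nvmf_filesystem_part is invoked with 4096 instead of 0, and the only functional difference is the transport option, where -c sets the in-capsule data size. With -c 4096, write payloads up to 4 KiB ride inside the NVMe/TCP command capsule itself rather than being fetched in a separate data transfer, which is exactly the path this group is meant to cover:

  # Only the transport creation differs between the two groups:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule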
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 [2024-11-20 07:11:46.543901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:11.885 07:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:11.885 { 00:11:11.885 "name": "Malloc1", 00:11:11.885 "aliases": [ 00:11:11.885 "542547fa-af50-4cef-a8f4-bccf8b2ea4cb" 00:11:11.885 ], 00:11:11.885 "product_name": "Malloc disk", 00:11:11.885 "block_size": 512, 00:11:11.885 "num_blocks": 1048576, 00:11:11.885 "uuid": "542547fa-af50-4cef-a8f4-bccf8b2ea4cb", 00:11:11.885 "assigned_rate_limits": { 00:11:11.885 "rw_ios_per_sec": 0, 00:11:11.885 "rw_mbytes_per_sec": 0, 00:11:11.885 "r_mbytes_per_sec": 0, 00:11:11.885 "w_mbytes_per_sec": 0 00:11:11.885 }, 00:11:11.885 "claimed": true, 00:11:11.885 "claim_type": "exclusive_write", 00:11:11.885 "zoned": false, 00:11:11.885 "supported_io_types": { 00:11:11.885 "read": true, 00:11:11.885 "write": true, 00:11:11.885 "unmap": true, 00:11:11.885 "flush": true, 00:11:11.885 "reset": true, 00:11:11.885 "nvme_admin": false, 00:11:11.885 "nvme_io": false, 00:11:11.885 "nvme_io_md": false, 00:11:11.885 "write_zeroes": true, 00:11:11.885 "zcopy": true, 00:11:11.885 "get_zone_info": false, 00:11:11.885 "zone_management": false, 00:11:11.885 "zone_append": false, 00:11:11.885 "compare": false, 00:11:11.885 "compare_and_write": false, 00:11:11.885 "abort": true, 00:11:11.885 "seek_hole": false, 00:11:11.885 "seek_data": false, 00:11:11.885 "copy": true, 00:11:11.885 "nvme_iov_md": false 00:11:11.885 }, 00:11:11.885 "memory_domains": [ 00:11:11.885 { 00:11:11.885 "dma_device_id": "system", 00:11:11.885 "dma_device_type": 1 00:11:11.885 }, 00:11:11.885 { 00:11:11.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.885 "dma_device_type": 2 00:11:11.885 } 00:11:11.885 ], 00:11:11.885 "driver_specific": {} 00:11:11.885 } 00:11:11.885 ]' 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:11.885 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:12.145 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:12.145 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:12.145 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:12.145 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:12.145 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.529 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.529 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:13.529 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.529 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:13.529 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:15.444 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:15.705 07:11:50 
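On the host side, the trace above condenses to: connect with the generated hostnqn/hostid, poll until a block device with the matching serial appears, confirm the exported size equals the malloc size, and lay down a single GPT partition. A sketch under the same values (the retry bound of the harness's wait loop is elided, and sec_size_to_bytes is assumed to multiply the sysfs 512-byte sector count, which matches the 536870912 it echoes here):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')  # nvme0n1 here
(( $(cat /sys/block/$dev/size) * 512 == 536870912 ))  # exported size must equal malloc size
mkdir -p /mnt/device
parted -s /dev/$dev mklabel gpt mkpart SPDK_TEST 0% 100%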
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:16.274 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.214 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:17.214 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.214 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:17.214 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.214 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.474 ************************************ 00:11:17.474 START TEST filesystem_in_capsule_ext4 00:11:17.474 ************************************ 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:17.474 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:17.475 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:17.475 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:17.475 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:17.475 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:17.475 mke2fs 1.47.0 (5-Feb-2023) 00:11:17.475 Discarding device blocks: 0/522240 done 00:11:17.475 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:17.475 Filesystem UUID: 834b434b-ea18-4845-8cdd-1c4cdeb0c301 00:11:17.475 Superblock backups stored on blocks: 00:11:17.475 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:17.475 00:11:17.475 Allocating group tables: 0/64 done 00:11:17.475 Writing inode tables: 
0/64 done 00:11:17.735 Creating journal (8192 blocks): done 00:11:19.947 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:19.947 00:11:19.947 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:19.947 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.232 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1159025 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.233 00:11:25.233 real 0m7.904s 00:11:25.233 user 0m0.037s 00:11:25.233 sys 0m0.073s 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:25.233 ************************************ 00:11:25.233 END TEST filesystem_in_capsule_ext4 00:11:25.233 ************************************ 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.233 
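Each filesystem_in_capsule_* pass repeats the cycle just completed for ext4: make the filesystem on the partition, mount it, round-trip a file, unmount, and check that both the target process and the block devices survived. Condensed from the trace (pid and paths as logged; the btrfs and xfs passes below differ only in the mkfs line):

mkfs.ext4 -F /dev/nvme0n1p1            # btrfs/xfs runs use 'mkfs.<fstype> -f' instead
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 1159025                        # the nvmf target app must still be alive
lsblk -l -o NAME | grep -qw nvme0n1    # controller still visible after the I/O
lsblk -l -o NAME | grep -qw nvme0n1p1  # partition still visible after the I/O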
************************************ 00:11:25.233 START TEST filesystem_in_capsule_btrfs 00:11:25.233 ************************************ 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:25.233 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:25.804 btrfs-progs v6.8.1 00:11:25.804 See https://btrfs.readthedocs.io for more information. 00:11:25.804 00:11:25.804 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:25.804 NOTE: several default settings have changed in version 5.15, please make sure 00:11:25.804 this does not affect your deployments: 00:11:25.804 - DUP for metadata (-m dup) 00:11:25.804 - enabled no-holes (-O no-holes) 00:11:25.804 - enabled free-space-tree (-R free-space-tree) 00:11:25.804 00:11:25.804 Label: (null) 00:11:25.804 UUID: c553ee85-8484-4363-a68b-0da4311ff289 00:11:25.804 Node size: 16384 00:11:25.804 Sector size: 4096 (CPU page size: 4096) 00:11:25.804 Filesystem size: 510.00MiB 00:11:25.804 Block group profiles: 00:11:25.804 Data: single 8.00MiB 00:11:25.804 Metadata: DUP 32.00MiB 00:11:25.804 System: DUP 8.00MiB 00:11:25.804 SSD detected: yes 00:11:25.804 Zoned device: no 00:11:25.804 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:25.804 Checksum: crc32c 00:11:25.804 Number of devices: 1 00:11:25.804 Devices: 00:11:25.804 ID SIZE PATH 00:11:25.804 1 510.00MiB /dev/nvme0n1p1 00:11:25.804 00:11:25.804 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:25.804 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.373 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.373 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1159025 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.374 00:11:26.374 real 0m0.950s 00:11:26.374 user 0m0.029s 00:11:26.374 sys 0m0.125s 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:26.374 ************************************ 00:11:26.374 END TEST filesystem_in_capsule_btrfs 00:11:26.374 ************************************ 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.374 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.374 ************************************ 00:11:26.374 START TEST filesystem_in_capsule_xfs 00:11:26.374 ************************************ 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:26.374 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:26.374 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:26.374 = sectsz=512 attr=2, projid32bit=1 00:11:26.374 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:26.374 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:26.374 data = bsize=4096 blocks=130560, imaxpct=25 00:11:26.374 = sunit=0 swidth=0 blks 00:11:26.374 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:26.374 log =internal log bsize=4096 blocks=16384, version=2 00:11:26.374 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:26.374 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:27.314 Discarding blocks...Done. 
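The make_filesystem helper visible in these traces selects -F only when the filesystem is ext4 (mkfs.ext4 spells its force flag with a capital F) and -f otherwise, which is why the btrfs and xfs branches above show force=-f. A sketch of that visible behavior (the real helper in autotest_common.sh also carries retry logic around the mkfs call, elided here):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F           # mkfs.ext4 force flag
    else
        force=-f           # mkfs.btrfs and mkfs.xfs force flag
    fi
    mkfs.$fstype $force "$dev_name"
}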
00:11:27.314 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:27.314 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1159025 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.224 00:11:29.224 real 0m2.802s 00:11:29.224 user 0m0.027s 00:11:29.224 sys 0m0.079s 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:29.224 ************************************ 00:11:29.224 END TEST filesystem_in_capsule_xfs 00:11:29.224 ************************************ 00:11:29.224 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:29.484 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:29.484 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1159025 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1159025 ']' 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1159025 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1159025 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1159025' 00:11:29.744 killing process with pid 1159025 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1159025 00:11:29.744 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1159025 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:30.004 00:11:30.004 real 0m19.075s 00:11:30.004 user 1m15.378s 00:11:30.004 sys 0m1.459s 00:11:30.004 07:12:04 
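Teardown mirrors setup in reverse, as traced above: drop the partition, detach the host, wait for the serial to disappear, delete the subsystem over RPC, then stop the target and reap it. Condensed with the same NQN, serial, and pid as logged ($rpc points at scripts/rpc.py as in the earlier sketch):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 1159025 && wait 1159025           # reactor_0, per the killprocess check above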
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.004 ************************************ 00:11:30.004 END TEST nvmf_filesystem_in_capsule 00:11:30.004 ************************************ 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.004 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.004 rmmod nvme_tcp 00:11:30.004 rmmod nvme_fabrics 00:11:30.004 rmmod nvme_keyring 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.005 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.548 00:11:32.548 real 0m50.564s 00:11:32.548 user 2m37.784s 00:11:32.548 sys 0m9.542s 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.548 
************************************ 00:11:32.548 END TEST nvmf_filesystem 00:11:32.548 ************************************ 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.548 ************************************ 00:11:32.548 START TEST nvmf_target_discovery 00:11:32.548 ************************************ 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:32.548 * Looking for test storage... 00:11:32.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.548 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.548 --rc genhtml_branch_coverage=1 00:11:32.548 --rc genhtml_function_coverage=1 00:11:32.548 --rc genhtml_legend=1 00:11:32.548 --rc geninfo_all_blocks=1 00:11:32.548 --rc geninfo_unexecuted_blocks=1 00:11:32.548 00:11:32.548 ' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.548 --rc genhtml_branch_coverage=1 00:11:32.548 --rc genhtml_function_coverage=1 00:11:32.548 --rc genhtml_legend=1 00:11:32.548 --rc geninfo_all_blocks=1 00:11:32.548 --rc geninfo_unexecuted_blocks=1 00:11:32.548 00:11:32.548 ' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.548 --rc genhtml_branch_coverage=1 00:11:32.548 --rc genhtml_function_coverage=1 00:11:32.548 --rc genhtml_legend=1 00:11:32.548 --rc geninfo_all_blocks=1 00:11:32.548 --rc geninfo_unexecuted_blocks=1 00:11:32.548 00:11:32.548 ' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.548 --rc genhtml_branch_coverage=1 00:11:32.548 --rc genhtml_function_coverage=1 00:11:32.548 --rc genhtml_legend=1 00:11:32.548 --rc geninfo_all_blocks=1 00:11:32.548 --rc geninfo_unexecuted_blocks=1 00:11:32.548 00:11:32.548 ' 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.548 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.549 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.683 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.683 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:40.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:40.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:40.684 Found net devices under 0000:31:00.0: cvl_0_0 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:40.684 Found net devices under 0000:31:00.1: cvl_0_1 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.684 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.945 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:11:40.945 00:11:40.945 --- 10.0.0.2 ping statistics --- 00:11:40.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.945 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:11:40.945 00:11:40.945 --- 10.0.0.1 ping statistics --- 00:11:40.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.945 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1168226 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1168226 00:11:40.945 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1168226 ']' 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.945 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.205 [2024-11-20 07:12:15.717986] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:11:41.205 [2024-11-20 07:12:15.718038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.205 [2024-11-20 07:12:15.804158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.205 [2024-11-20 07:12:15.840087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.205 [2024-11-20 07:12:15.840120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.205 [2024-11-20 07:12:15.840128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.205 [2024-11-20 07:12:15.840134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.205 [2024-11-20 07:12:15.840140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
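At this point nvmfappstart has launched the target binary inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling until the application answers on its RPC socket, before the test issues any rpc_cmd calls. A rough standalone equivalent (binary path, socket path, and flags are taken from the trace; the polling loop is an assumption about what waitforlisten amounts to, not its actual body):

    # Start nvmf_tgt in the target netns and block until its RPC socket
    # responds; kill -0 checks that the launched process is still alive.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt is up; pid $nvmfpid"

Once the socket answers, the subsequent rpc_cmd invocations (nvmf_create_transport, bdev_null_create, nvmf_create_subsystem, ...) all go through that same /var/tmp/spdk.sock.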
00:11:41.205 [2024-11-20 07:12:15.841630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.205 [2024-11-20 07:12:15.841744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.205 [2024-11-20 07:12:15.841941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.205 [2024-11-20 07:12:15.842139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.775 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:41.775 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:41.775 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.775 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.775 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 [2024-11-20 07:12:16.565147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 Null1 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 [2024-11-20 07:12:16.625484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 Null2 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:42.036 Null3 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.036 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 Null4 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.037 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:42.297 00:11:42.297 Discovery Log Number of Records 6, Generation counter 6 00:11:42.297 =====Discovery Log Entry 0====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: current discovery subsystem 00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4420 00:11:42.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: explicit discovery connections, duplicate discovery information 00:11:42.297 sectype: none 00:11:42.297 =====Discovery Log Entry 1====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: nvme subsystem 00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4420 00:11:42.297 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: none 00:11:42.297 sectype: none 00:11:42.297 =====Discovery Log Entry 2====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: nvme subsystem 00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4420 00:11:42.297 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: none 00:11:42.297 sectype: none 00:11:42.297 =====Discovery Log Entry 3====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: nvme subsystem 00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4420 00:11:42.297 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: none 00:11:42.297 sectype: none 00:11:42.297 =====Discovery Log Entry 4====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: nvme subsystem 
00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4420 00:11:42.297 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: none 00:11:42.297 sectype: none 00:11:42.297 =====Discovery Log Entry 5====== 00:11:42.297 trtype: tcp 00:11:42.297 adrfam: ipv4 00:11:42.297 subtype: discovery subsystem referral 00:11:42.297 treq: not required 00:11:42.297 portid: 0 00:11:42.297 trsvcid: 4430 00:11:42.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.297 traddr: 10.0.0.2 00:11:42.297 eflags: none 00:11:42.297 sectype: none 00:11:42.297 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:42.297 Perform nvmf subsystem discovery via RPC 00:11:42.297 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:42.297 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.297 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.297 [ 00:11:42.297 { 00:11:42.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:42.297 "subtype": "Discovery", 00:11:42.297 "listen_addresses": [ 00:11:42.297 { 00:11:42.297 "trtype": "TCP", 00:11:42.297 "adrfam": "IPv4", 00:11:42.297 "traddr": "10.0.0.2", 00:11:42.297 "trsvcid": "4420" 00:11:42.297 } 00:11:42.297 ], 00:11:42.297 "allow_any_host": true, 00:11:42.297 "hosts": [] 00:11:42.297 }, 00:11:42.297 { 00:11:42.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.297 "subtype": "NVMe", 00:11:42.297 "listen_addresses": [ 00:11:42.297 { 00:11:42.297 "trtype": "TCP", 00:11:42.297 "adrfam": "IPv4", 00:11:42.297 "traddr": "10.0.0.2", 00:11:42.297 "trsvcid": "4420" 00:11:42.297 } 00:11:42.297 ], 00:11:42.297 "allow_any_host": true, 00:11:42.297 "hosts": [], 00:11:42.297 "serial_number": "SPDK00000000000001", 00:11:42.298 "model_number": "SPDK bdev Controller", 00:11:42.298 "max_namespaces": 32, 00:11:42.298 "min_cntlid": 1, 00:11:42.298 "max_cntlid": 65519, 00:11:42.298 "namespaces": [ 00:11:42.298 { 00:11:42.298 "nsid": 1, 00:11:42.298 "bdev_name": "Null1", 00:11:42.298 "name": "Null1", 00:11:42.298 "nguid": "A0851F6C326C49A6BC4CE967867FFC09", 00:11:42.298 "uuid": "a0851f6c-326c-49a6-bc4c-e967867ffc09" 00:11:42.298 } 00:11:42.298 ] 00:11:42.298 }, 00:11:42.298 { 00:11:42.298 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.298 "subtype": "NVMe", 00:11:42.298 "listen_addresses": [ 00:11:42.298 { 00:11:42.298 "trtype": "TCP", 00:11:42.298 "adrfam": "IPv4", 00:11:42.298 "traddr": "10.0.0.2", 00:11:42.298 "trsvcid": "4420" 00:11:42.298 } 00:11:42.298 ], 00:11:42.298 "allow_any_host": true, 00:11:42.298 "hosts": [], 00:11:42.298 "serial_number": "SPDK00000000000002", 00:11:42.298 "model_number": "SPDK bdev Controller", 00:11:42.298 "max_namespaces": 32, 00:11:42.298 "min_cntlid": 1, 00:11:42.298 "max_cntlid": 65519, 00:11:42.298 "namespaces": [ 00:11:42.298 { 00:11:42.298 "nsid": 1, 00:11:42.298 "bdev_name": "Null2", 00:11:42.298 "name": "Null2", 00:11:42.298 "nguid": "EE5D013CD389458A9CB25FCB9AB057CE", 00:11:42.298 "uuid": "ee5d013c-d389-458a-9cb2-5fcb9ab057ce" 00:11:42.298 } 00:11:42.298 ] 00:11:42.298 }, 00:11:42.298 { 00:11:42.298 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:42.298 "subtype": "NVMe", 00:11:42.298 "listen_addresses": [ 00:11:42.298 { 00:11:42.298 "trtype": "TCP", 00:11:42.298 "adrfam": "IPv4", 00:11:42.298 "traddr": "10.0.0.2", 
00:11:42.298 "trsvcid": "4420" 00:11:42.298 } 00:11:42.298 ], 00:11:42.298 "allow_any_host": true, 00:11:42.298 "hosts": [], 00:11:42.298 "serial_number": "SPDK00000000000003", 00:11:42.298 "model_number": "SPDK bdev Controller", 00:11:42.298 "max_namespaces": 32, 00:11:42.298 "min_cntlid": 1, 00:11:42.298 "max_cntlid": 65519, 00:11:42.298 "namespaces": [ 00:11:42.298 { 00:11:42.298 "nsid": 1, 00:11:42.298 "bdev_name": "Null3", 00:11:42.298 "name": "Null3", 00:11:42.298 "nguid": "1C96A45A599D4698B46754A082246260", 00:11:42.298 "uuid": "1c96a45a-599d-4698-b467-54a082246260" 00:11:42.298 } 00:11:42.298 ] 00:11:42.298 }, 00:11:42.298 { 00:11:42.298 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:42.298 "subtype": "NVMe", 00:11:42.298 "listen_addresses": [ 00:11:42.298 { 00:11:42.298 "trtype": "TCP", 00:11:42.298 "adrfam": "IPv4", 00:11:42.298 "traddr": "10.0.0.2", 00:11:42.298 "trsvcid": "4420" 00:11:42.298 } 00:11:42.298 ], 00:11:42.298 "allow_any_host": true, 00:11:42.298 "hosts": [], 00:11:42.298 "serial_number": "SPDK00000000000004", 00:11:42.298 "model_number": "SPDK bdev Controller", 00:11:42.298 "max_namespaces": 32, 00:11:42.298 "min_cntlid": 1, 00:11:42.298 "max_cntlid": 65519, 00:11:42.298 "namespaces": [ 00:11:42.298 { 00:11:42.298 "nsid": 1, 00:11:42.298 "bdev_name": "Null4", 00:11:42.298 "name": "Null4", 00:11:42.298 "nguid": "CD32ED5C61DD41E59A9F18CA79A5C233", 00:11:42.298 "uuid": "cd32ed5c-61dd-41e5-9a9f-18ca79a5c233" 00:11:42.298 } 00:11:42.298 ] 00:11:42.298 } 00:11:42.298 ] 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:42.298 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:42.558 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.558 rmmod nvme_tcp 00:11:42.558 rmmod nvme_fabrics 00:11:42.558 rmmod nvme_keyring 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1168226 ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1168226 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1168226 ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1168226 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1168226 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1168226' 00:11:42.558 killing process with pid 1168226 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1168226 00:11:42.558 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1168226 00:11:42.818 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.818 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.819 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.819 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.819 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.796 00:11:44.796 real 0m12.586s 00:11:44.796 user 0m8.781s 00:11:44.796 sys 0m6.821s 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.796 ************************************ 00:11:44.796 END TEST nvmf_target_discovery 00:11:44.796 ************************************ 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.796 ************************************ 00:11:44.796 START TEST nvmf_referrals 00:11:44.796 ************************************ 00:11:44.796 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.056 * Looking for test storage... 
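The nvmftestfini sequence above undoes only its own state: iptr filters the saved ruleset down to rules lacking the SPDK_NVMF comment (the tag that ipts attached when the ACCEPT rule was inserted), restores it, and then removes the test namespace and leftover addresses. A compact sketch of that idiom (rule tag and interface names are from the trace; the ip netns del line is an assumed stand-in for _remove_spdk_ns, whose body is not shown here):

    # Remove only the iptables rules this test added, identified by the
    # SPDK_NVMF comment, then tear down the test namespace and addresses.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns del cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1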
00:11:45.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.056 --rc genhtml_branch_coverage=1 00:11:45.056 --rc genhtml_function_coverage=1 00:11:45.056 --rc genhtml_legend=1 00:11:45.056 --rc geninfo_all_blocks=1 00:11:45.056 --rc geninfo_unexecuted_blocks=1 00:11:45.056 00:11:45.056 ' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.056 --rc genhtml_branch_coverage=1 00:11:45.056 --rc genhtml_function_coverage=1 00:11:45.056 --rc genhtml_legend=1 00:11:45.056 --rc geninfo_all_blocks=1 00:11:45.056 --rc geninfo_unexecuted_blocks=1 00:11:45.056 00:11:45.056 ' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.056 --rc genhtml_branch_coverage=1 00:11:45.056 --rc genhtml_function_coverage=1 00:11:45.056 --rc genhtml_legend=1 00:11:45.056 --rc geninfo_all_blocks=1 00:11:45.056 --rc geninfo_unexecuted_blocks=1 00:11:45.056 00:11:45.056 ' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.056 --rc genhtml_branch_coverage=1 00:11:45.056 --rc genhtml_function_coverage=1 00:11:45.056 --rc genhtml_legend=1 00:11:45.056 --rc geninfo_all_blocks=1 00:11:45.056 --rc geninfo_unexecuted_blocks=1 00:11:45.056 00:11:45.056 ' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
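For the referrals run, common.sh generates the host identity once with nvme-cli and reuses it for every discover/connect, alongside the referral endpoints (127.0.0.2 through 127.0.0.4 on port 4430) defined just above and below this point. A small sketch of that derivation (the suffix-stripping shorthand is an assumption for illustration; the script captures the two values separately):

    # Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
    # and derive the host ID from its trailing UUID, as traced above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420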
00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:45.056 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.057 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:53.192 07:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:53.192 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:53.192 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:53.192 
07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.192 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:53.193 Found net devices under 0000:31:00.0: cvl_0_0 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:53.193 Found net devices under 0000:31:00.1: cvl_0_1 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.193 07:12:27 
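[annotation] The "Found net devices under ..." lines come from globbing each selected PCI function's net/ directory in sysfs; the basename of whatever matches is the kernel interface name. A minimal sketch with one address from the trace:

# Sketch: resolve a PCI function to its netdev name through sysfs.
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep ifnames
echo "Found net devices under $pci: ${pci_net_devs[*]}"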
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.193 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:11:53.454 00:11:53.454 --- 10.0.0.2 ping statistics --- 00:11:53.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.454 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:11:53.454 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:53.454 00:11:53.454 --- 10.0.0.1 ping statistics --- 00:11:53.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.454 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.455 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1173276 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1173276 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1173276 ']' 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
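[annotation] nvmf_tcp_init above builds the loopback fixture: one E810 port (cvl_0_0, 10.0.0.2) is moved into a fresh network namespace to act as the target, the sibling port (cvl_0_1, 10.0.0.1) stays in the host as the initiator, an SPDK-tagged iptables rule opens tcp/4420, and a ping in each direction proves reachability. Condensed to its essentials (names and addresses from the trace; the real iptables comment string is longer in the log):

# Sketch: target-in-namespace / initiator-on-host topology used by these tests.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                           # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> host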
00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.768 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.768 [2024-11-20 07:12:28.289074] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:11:53.768 [2024-11-20 07:12:28.289142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.768 [2024-11-20 07:12:28.380080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.768 [2024-11-20 07:12:28.421650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.768 [2024-11-20 07:12:28.421686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.768 [2024-11-20 07:12:28.421694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.768 [2024-11-20 07:12:28.421701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.768 [2024-11-20 07:12:28.421706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.768 [2024-11-20 07:12:28.423547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.768 [2024-11-20 07:12:28.423664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.768 [2024-11-20 07:12:28.423806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.768 [2024-11-20 07:12:28.423807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.340 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:54.340 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:54.340 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.340 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.340 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.600 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.600 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.600 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.600 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.600 [2024-11-20 07:12:29.127698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.600 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
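[annotation] nvmfappstart above wraps two steps: launch nvmf_tgt inside the target namespace, then spin in waitforlisten until the UNIX-domain RPC socket answers. A rough equivalent, assuming this job's workspace layout and rpc.py's default socket /var/tmp/spdk.sock (the polling loop here is a simplification of waitforlisten):

# Sketch: start the target in its namespace and wait for the RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
  sleep 0.5
done
echo "nvmf_tgt pid $nvmfpid is listening on /var/tmp/spdk.sock"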
00:11:54.601 [2024-11-20 07:12:29.152020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.601 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.861 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:54.862 07:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.862 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.122 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:55.123 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.384 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.384 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.644 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.905 07:12:30 
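[annotation] The exchange above is the core assertion of referrals.sh: the referral list provisioned over RPC must round-trip through the discovery log page an initiator actually reads. Condensed, with addresses, ports, and jq filters taken from the trace (the real nvme discover calls also pass --hostnqn/--hostid, omitted here for brevity):

# Sketch: provision referrals over RPC, then cross-check them via nvme discover.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ref in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc.py nvmf_discovery_add_referral -t tcp -a "$ref" -s 4430
done
rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
log_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
[[ "$rpc_ips" == "$log_ips" ]] && echo 'referral lists match'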
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.905 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:56.166 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:56.427 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.427 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.427 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.687 [2024-11-20 07:12:31.395221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f8de0 is same with the state(6) to be set 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.687 07:12:31 
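[annotation] The subnqn checks above distinguish the two record flavors a referral can produce: registering with -n nqn.2016-06.io.spdk:cnode1 yields an "nvme subsystem" record, while -n discovery (or the default) yields a "discovery subsystem referral" pointing at the well-known discovery NQN. A small sketch of inspecting that, using the jq record shape from the trace:

# Sketch: list discovery log records with their subtype, subnqn, and address.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | [.subtype, .subnqn, .traddr] | @tsv'
# expect 'nvme subsystem' rows to carry nqn.2016-06.io.spdk:cnode1 and
# 'discovery subsystem referral' rows to carry nqn.2014-08.org.nvmexpress.discovery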
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.687 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.687 rmmod nvme_tcp 00:11:56.687 rmmod nvme_fabrics 00:11:56.687 rmmod nvme_keyring 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1173276 ']' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1173276 ']' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1173276' 00:11:56.947 killing process with pid 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1173276 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.947 
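[annotation] nvmftestfini above tears the fixture down in reverse: unload nvme-tcp/nvme-fabrics/nvme-keyring, kill the reactor process, and drop only the firewall rules this run added. The iptables step keys on the comment tag seen earlier; roughly:

# Sketch: remove only SPDK-tagged rules, then retire the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip rules carrying the test's comment
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # assumed final step of _remove_spdk_ns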
07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.947 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.489 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.490 00:11:59.490 real 0m14.197s 00:11:59.490 user 0m16.301s 00:11:59.490 sys 0m7.133s 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.490 ************************************ 00:11:59.490 END TEST nvmf_referrals 00:11:59.490 ************************************ 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.490 ************************************ 00:11:59.490 START TEST nvmf_connect_disconnect 00:11:59.490 ************************************ 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:59.490 * Looking for test storage... 00:11:59.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:59.490 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.490 07:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:59.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.490 --rc genhtml_branch_coverage=1 00:11:59.490 --rc genhtml_function_coverage=1 00:11:59.490 --rc genhtml_legend=1 00:11:59.490 --rc geninfo_all_blocks=1 00:11:59.490 --rc geninfo_unexecuted_blocks=1 00:11:59.490 00:11:59.490 ' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:59.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.490 --rc genhtml_branch_coverage=1 00:11:59.490 --rc genhtml_function_coverage=1 00:11:59.490 --rc genhtml_legend=1 00:11:59.490 --rc geninfo_all_blocks=1 00:11:59.490 --rc geninfo_unexecuted_blocks=1 00:11:59.490 00:11:59.490 ' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:59.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.490 --rc genhtml_branch_coverage=1 00:11:59.490 --rc genhtml_function_coverage=1 00:11:59.490 --rc genhtml_legend=1 00:11:59.490 --rc geninfo_all_blocks=1 00:11:59.490 --rc 
geninfo_unexecuted_blocks=1 00:11:59.490 00:11:59.490 ' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:59.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.490 --rc genhtml_branch_coverage=1 00:11:59.490 --rc genhtml_function_coverage=1 00:11:59.490 --rc genhtml_legend=1 00:11:59.490 --rc geninfo_all_blocks=1 00:11:59.490 --rc geninfo_unexecuted_blocks=1 00:11:59.490 00:11:59.490 ' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.490 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.491 07:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.491 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 
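[annotation] Note the captured stderr above: common.sh line 33 ran '[' '' -eq 1 ']', a numeric test against an empty string, which [ rejects as "integer expression expected" without failing the run. A defensive sketch of the same kind of check (the flag name here is hypothetical):

# Sketch: default empty flags before numeric tests to avoid '[: : integer expression expected'.
: "${SOME_TEST_FLAG:=0}"             # hypothetical flag; empty/unset becomes 0
if [ "$SOME_TEST_FLAG" -eq 1 ]; then
  echo 'flag enabled'
fi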
00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:07.628 Found 0000:31:00.0 (0x8086 - 0x159b) 
00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:07.628 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:07.628 Found net devices under 0000:31:00.0: cvl_0_0 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
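Each selected PCI function is then mapped to its kernel interface through sysfs: the glob /sys/bus/pci/devices/$pci/net/* returns the netdev directory, and the "${pci_net_devs[@]##*/}" expansion strips the path so only the interface name (this rig's cvl_0_0 / cvl_0_1) survives. Roughly, per the loop traced above:

    # Sketch of the sysfs lookup the trace is executing.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # path -> interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done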
00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:07.628 Found net devices under 0000:31:00.1: cvl_0_1 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.628 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.629 07:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.629 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:12:07.890 00:12:07.890 --- 10.0.0.2 ping statistics --- 00:12:07.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.890 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:12:07.890 00:12:07.890 --- 10.0.0.1 ping statistics --- 00:12:07.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.890 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.890 07:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1178733 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1178733 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1178733 ']' 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.890 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.890 [2024-11-20 07:12:42.586370] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:12:07.890 [2024-11-20 07:12:42.586441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.150 [2024-11-20 07:12:42.679002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.150 [2024-11-20 07:12:42.721449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.150 [2024-11-20 07:12:42.721485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.150 [2024-11-20 07:12:42.721494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.150 [2024-11-20 07:12:42.721500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.150 [2024-11-20 07:12:42.721506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
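The nvmf_tcp_init trace above (common.sh@250 onward) builds a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; an iptables ACCEPT rule opens TCP/4420 and both directions are ping-verified before nvmfappstart launches nvmf_tgt inside the namespace. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # via nvmfappstart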
00:12:08.150 [2024-11-20 07:12:42.723386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.150 [2024-11-20 07:12:42.723508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.150 [2024-11-20 07:12:42.723664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.150 [2024-11-20 07:12:42.723665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 [2024-11-20 07:12:43.443833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.720 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.980 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.980 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.980 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.980 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.980 07:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.981 [2024-11-20 07:12:43.511335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:08.981 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:12.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.473 rmmod nvme_tcp 00:12:27.473 rmmod nvme_fabrics 00:12:27.473 rmmod nvme_keyring 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1178733 ']' 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1178733 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1178733 ']' 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1178733 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
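With the target listening, connect_disconnect.sh provisions it entirely over RPC and then loops kernel-initiator connects: num_iterations=5 above, and each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line is one completed connect/disconnect round. The RPC sequence, condensed from the rpc_cmd traces above (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420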
00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1178733 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1178733' 00:12:27.473 killing process with pid 1178733 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1178733 00:12:27.473 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1178733 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.473 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.021 00:12:30.021 real 0m30.340s 00:12:30.021 user 1m19.505s 00:12:30.021 sys 0m7.800s 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.021 ************************************ 00:12:30.021 END TEST nvmf_connect_disconnect 00:12:30.021 ************************************ 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:30.021 07:13:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.021 ************************************ 00:12:30.021 START TEST nvmf_multitarget 00:12:30.021 ************************************ 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.021 * Looking for test storage... 00:12:30.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:30.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.021 --rc genhtml_branch_coverage=1 00:12:30.021 --rc genhtml_function_coverage=1 00:12:30.021 --rc genhtml_legend=1 00:12:30.021 --rc geninfo_all_blocks=1 00:12:30.021 --rc geninfo_unexecuted_blocks=1 00:12:30.021 00:12:30.021 ' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:30.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.021 --rc genhtml_branch_coverage=1 00:12:30.021 --rc genhtml_function_coverage=1 00:12:30.021 --rc genhtml_legend=1 00:12:30.021 --rc geninfo_all_blocks=1 00:12:30.021 --rc geninfo_unexecuted_blocks=1 00:12:30.021 00:12:30.021 ' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:30.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.021 --rc genhtml_branch_coverage=1 00:12:30.021 --rc genhtml_function_coverage=1 00:12:30.021 --rc genhtml_legend=1 00:12:30.021 --rc geninfo_all_blocks=1 00:12:30.021 --rc geninfo_unexecuted_blocks=1 00:12:30.021 00:12:30.021 ' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:30.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.021 --rc genhtml_branch_coverage=1 00:12:30.021 --rc genhtml_function_coverage=1 00:12:30.021 --rc genhtml_legend=1 00:12:30.021 --rc geninfo_all_blocks=1 00:12:30.021 --rc geninfo_unexecuted_blocks=1 00:12:30.021 00:12:30.021 ' 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.021 07:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.021 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.022 07:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.022 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.162 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:38.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:38.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:38.163 Found net devices under 0000:31:00.0: cvl_0_0 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:38.163 Found net devices under 0000:31:00.1: cvl_0_1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:12:38.163 00:12:38.163 --- 10.0.0.2 ping statistics --- 00:12:38.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.163 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:12:38.163 00:12:38.163 --- 10.0.0.1 ping statistics --- 00:12:38.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.163 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1187420 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1187420 00:12:38.163 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1187420 ']' 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.164 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 [2024-11-20 07:13:12.916668] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
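The multitarget suite drives a different surface: multiple SPDK targets inside one nvmf_tgt process, managed through test/nvmf/target/multitarget_rpc.py. The trace that follows checks that exactly one (default) target exists, creates nvmf_tgt_1 and nvmf_tgt_2 (the -s 32 argument appears to cap subsystems per target), confirms the count is now 3, deletes both, and confirms the count is back to 1. Condensed sketch of that flow:

    rpc=test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # default target only
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]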
00:12:38.164 [2024-11-20 07:13:12.916738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.424 [2024-11-20 07:13:13.009745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.424 [2024-11-20 07:13:13.051185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.424 [2024-11-20 07:13:13.051224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.424 [2024-11-20 07:13:13.051237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.424 [2024-11-20 07:13:13.051243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.424 [2024-11-20 07:13:13.051249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.424 [2024-11-20 07:13:13.053129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.424 [2024-11-20 07:13:13.053251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.424 [2024-11-20 07:13:13.053406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.424 [2024-11-20 07:13:13.053407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.994 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.994 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:38.994 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.994 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:38.994 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:39.255 "nvmf_tgt_1" 00:12:39.255 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:39.516 "nvmf_tgt_2" 00:12:39.516 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:39.516 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:39.516 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:39.516 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:39.516 true 00:12:39.776 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:39.776 true 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.777 rmmod nvme_tcp 00:12:39.777 rmmod nvme_fabrics 00:12:39.777 rmmod nvme_keyring 00:12:39.777 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1187420 ']' 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1187420 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1187420 ']' 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1187420 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1187420 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:40.037 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:40.038 07:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1187420' 00:12:40.038 killing process with pid 1187420 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1187420 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1187420 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.038 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.581 00:12:42.581 real 0m12.575s 00:12:42.581 user 0m9.887s 00:12:42.581 sys 0m6.758s 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 END TEST nvmf_multitarget 00:12:42.581 ************************************ 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.581 ************************************ 00:12:42.581 START TEST nvmf_rpc 00:12:42.581 ************************************ 00:12:42.581 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.581 * Looking for test storage... 
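nvmf_multitarget finished in 12.575s wall time, and run_test moves on to rpc.sh. The multitarget assertions traced above reduce to a short sequence: count targets, create two more, recount, delete both, recount. A condensed sketch using the multitarget_rpc.py helper and jq checks seen in the trace (relative path and python3 launcher assumed):

    rpc=test/nvmf/target/multitarget_rpc.py
    [ "$(python3 $rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    python3 $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    python3 $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(python3 $rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
    python3 $rpc nvmf_delete_target -n nvmf_tgt_1
    python3 $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$(python3 $rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default

Each nvmf_create_target call prints the new target's name ("nvmf_tgt_1", "nvmf_tgt_2" above) and each delete returns true, which is exactly what the trace records between the jq length checks.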
00:12:42.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.581 --rc genhtml_branch_coverage=1 00:12:42.581 --rc genhtml_function_coverage=1 00:12:42.581 --rc genhtml_legend=1 00:12:42.581 --rc geninfo_all_blocks=1 00:12:42.581 --rc geninfo_unexecuted_blocks=1 00:12:42.581 00:12:42.581 ' 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.581 --rc genhtml_branch_coverage=1 00:12:42.581 --rc genhtml_function_coverage=1 00:12:42.581 --rc genhtml_legend=1 00:12:42.581 --rc geninfo_all_blocks=1 00:12:42.581 --rc geninfo_unexecuted_blocks=1 00:12:42.581 00:12:42.581 ' 00:12:42.581 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.582 --rc genhtml_branch_coverage=1 00:12:42.582 --rc genhtml_function_coverage=1 00:12:42.582 --rc genhtml_legend=1 00:12:42.582 --rc geninfo_all_blocks=1 00:12:42.582 --rc geninfo_unexecuted_blocks=1 00:12:42.582 00:12:42.582 ' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:42.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.582 --rc genhtml_branch_coverage=1 00:12:42.582 --rc genhtml_function_coverage=1 00:12:42.582 --rc genhtml_legend=1 00:12:42.582 --rc geninfo_all_blocks=1 00:12:42.582 --rc geninfo_unexecuted_blocks=1 00:12:42.582 00:12:42.582 ' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
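The cmp_versions walk above is how the harness decides whether the installed lcov is older than 2.0 and therefore needs the --rc lcov_* spellings of the coverage options: each version string is split into numeric fields and compared left to right. A condensed illustration of that comparison, not the exact scripts/common.sh implementation:

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: keep the old-style --rc option names"

Here lt 1.15 2 succeeds on the first field (1 < 2), so the trace goes on to export the lcov_branch_coverage/lcov_function_coverage variants of LCOV_OPTS.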
00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.582 07:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.582 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.583 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.583 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:50.720 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:50.720 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:50.720 Found net devices under 0000:31:00.0: cvl_0_0 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.720 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:50.721 Found net devices under 0000:31:00.1: cvl_0_1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.721 07:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.721 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:12:50.983 00:12:50.983 --- 10.0.0.2 ping statistics --- 00:12:50.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.983 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:12:50.983 00:12:50.983 --- 10.0.0.1 ping statistics --- 00:12:50.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.983 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1192607 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1192607 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1192607 ']' 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:50.983 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.983 [2024-11-20 07:13:25.613507] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
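Before this second target came up, nvmftestinit rebuilt the same two-namespace topology the previous test used: e810 port cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits the NVMe/TCP port before connectivity is ping-tested in both directions. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP (port 4420)
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Putting the target's port in its own namespace lets initiator and target share one host while still exercising the real e810 NICs end to end, which is what NET_TYPE=phy asks for.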
00:12:50.983 [2024-11-20 07:13:25.613572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.983 [2024-11-20 07:13:25.704210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.983 [2024-11-20 07:13:25.745722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.983 [2024-11-20 07:13:25.745757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.983 [2024-11-20 07:13:25.745765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.983 [2024-11-20 07:13:25.745772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.983 [2024-11-20 07:13:25.745778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.983 [2024-11-20 07:13:25.747662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.983 [2024-11-20 07:13:25.747779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.983 [2024-11-20 07:13:25.747926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.244 [2024-11-20 07:13:25.747926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:51.815 "tick_rate": 2400000000, 00:12:51.815 "poll_groups": [ 00:12:51.815 { 00:12:51.815 "name": "nvmf_tgt_poll_group_000", 00:12:51.815 "admin_qpairs": 0, 00:12:51.815 "io_qpairs": 0, 00:12:51.815 "current_admin_qpairs": 0, 00:12:51.815 "current_io_qpairs": 0, 00:12:51.815 "pending_bdev_io": 0, 00:12:51.815 "completed_nvme_io": 0, 00:12:51.815 "transports": [] 00:12:51.815 }, 00:12:51.815 { 00:12:51.815 "name": "nvmf_tgt_poll_group_001", 00:12:51.815 "admin_qpairs": 0, 00:12:51.815 "io_qpairs": 0, 00:12:51.815 "current_admin_qpairs": 0, 00:12:51.815 "current_io_qpairs": 0, 00:12:51.815 "pending_bdev_io": 0, 00:12:51.815 "completed_nvme_io": 0, 00:12:51.815 "transports": [] 00:12:51.815 }, 00:12:51.815 { 00:12:51.815 "name": "nvmf_tgt_poll_group_002", 00:12:51.815 "admin_qpairs": 0, 00:12:51.815 "io_qpairs": 0, 00:12:51.815 
"current_admin_qpairs": 0, 00:12:51.815 "current_io_qpairs": 0, 00:12:51.815 "pending_bdev_io": 0, 00:12:51.815 "completed_nvme_io": 0, 00:12:51.815 "transports": [] 00:12:51.815 }, 00:12:51.815 { 00:12:51.815 "name": "nvmf_tgt_poll_group_003", 00:12:51.815 "admin_qpairs": 0, 00:12:51.815 "io_qpairs": 0, 00:12:51.815 "current_admin_qpairs": 0, 00:12:51.815 "current_io_qpairs": 0, 00:12:51.815 "pending_bdev_io": 0, 00:12:51.815 "completed_nvme_io": 0, 00:12:51.815 "transports": [] 00:12:51.815 } 00:12:51.815 ] 00:12:51.815 }' 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:51.815 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:52.076 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:52.076 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.076 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.076 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.076 [2024-11-20 07:13:26.592182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:52.077 "tick_rate": 2400000000, 00:12:52.077 "poll_groups": [ 00:12:52.077 { 00:12:52.077 "name": "nvmf_tgt_poll_group_000", 00:12:52.077 "admin_qpairs": 0, 00:12:52.077 "io_qpairs": 0, 00:12:52.077 "current_admin_qpairs": 0, 00:12:52.077 "current_io_qpairs": 0, 00:12:52.077 "pending_bdev_io": 0, 00:12:52.077 "completed_nvme_io": 0, 00:12:52.077 "transports": [ 00:12:52.077 { 00:12:52.077 "trtype": "TCP" 00:12:52.077 } 00:12:52.077 ] 00:12:52.077 }, 00:12:52.077 { 00:12:52.077 "name": "nvmf_tgt_poll_group_001", 00:12:52.077 "admin_qpairs": 0, 00:12:52.077 "io_qpairs": 0, 00:12:52.077 "current_admin_qpairs": 0, 00:12:52.077 "current_io_qpairs": 0, 00:12:52.077 "pending_bdev_io": 0, 00:12:52.077 "completed_nvme_io": 0, 00:12:52.077 "transports": [ 00:12:52.077 { 00:12:52.077 "trtype": "TCP" 00:12:52.077 } 00:12:52.077 ] 00:12:52.077 }, 00:12:52.077 { 00:12:52.077 "name": "nvmf_tgt_poll_group_002", 00:12:52.077 "admin_qpairs": 0, 00:12:52.077 "io_qpairs": 0, 00:12:52.077 "current_admin_qpairs": 0, 00:12:52.077 "current_io_qpairs": 0, 00:12:52.077 "pending_bdev_io": 0, 00:12:52.077 "completed_nvme_io": 0, 00:12:52.077 "transports": [ 00:12:52.077 { 00:12:52.077 "trtype": "TCP" 
00:12:52.077 } 00:12:52.077 ] 00:12:52.077 }, 00:12:52.077 { 00:12:52.077 "name": "nvmf_tgt_poll_group_003", 00:12:52.077 "admin_qpairs": 0, 00:12:52.077 "io_qpairs": 0, 00:12:52.077 "current_admin_qpairs": 0, 00:12:52.077 "current_io_qpairs": 0, 00:12:52.077 "pending_bdev_io": 0, 00:12:52.077 "completed_nvme_io": 0, 00:12:52.077 "transports": [ 00:12:52.077 { 00:12:52.077 "trtype": "TCP" 00:12:52.077 } 00:12:52.077 ] 00:12:52.077 } 00:12:52.077 ] 00:12:52.077 }' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 Malloc1 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.077 [2024-11-20 07:13:26.799835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.077 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:52.077 [2024-11-20 07:13:26.836691] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:52.338 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.338 could not add new controller: failed to write to nvme-fabrics device 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:52.338 07:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.338 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.748 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.748 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:53.748 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.748 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:53.748 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:55.659 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.921 [2024-11-20 07:13:30.573309] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:55.921 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.921 could not add new controller: failed to write to nvme-fabrics device 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.921 
07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.921 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.834 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.834 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:57.834 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.834 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:57.834 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.747 
07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.747 [2024-11-20 07:13:34.296039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.747 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.132 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.132 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:01.132 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.132 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:01.132 07:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:03.046 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:03.046 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:03.046 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 [2024-11-20 07:13:38.019000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.309 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.224 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.224 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:05.224 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.224 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:05.224 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 [2024-11-20 07:13:41.746078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.140 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.056 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.056 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:09.056 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.056 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:09.056 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:10.973 
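The waitforserial/waitforserial_disconnect pairs tracing through here (autotest_common.sh @1200-1233) poll lsblk until a namespace reporting the expected serial appears, or disappears on teardown. A reduced sketch of the polling idiom as it reads from the trace, using the SPDKISFASTANDAWESOME serial from this run:

# Poll lsblk for up to 16 attempts (~32 s) until the expected number of
# block devices report the given serial; mirrors the @1207-1210 trace.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 got=0
    while (( i++ <= 15 )); do
        sleep 2
        got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( got == want )) && return 0
    done
    echo "devices with serial $serial never reached $want" >&2
    return 1
}

waitforserial SPDKISFASTANDAWESOME 1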
07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 [2024-11-20 07:13:45.502974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.973 07:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.362 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.362 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:12.362 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.362 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:12.362 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 [2024-11-20 07:13:49.262592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.910 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.911 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.911 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.911 07:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.298 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.298 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:16.298 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.298 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:16.298 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:18.211 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:18.211 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:18.212 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:18.473 
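At this point rpc.sh switches phases: the seq 1 5 above starts its second loop (rpc.sh@99-107), which drives the same subsystem RPCs but never connects a host, whereas the loop just completed (rpc.sh@81-94) did a full create/connect/disconnect/teardown five times. A condensed sketch of that first loop, reconstructed from the trace rather than quoted from the script, assuming scripts/rpc.py and a pre-created Malloc1 bdev:

# Reconstructed from the rpc.sh@81-94 trace; not the verbatim script.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
loops=5
for i in $(seq 1 "$loops"); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME      # poll lsblk as sketched earlier
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done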
07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 [2024-11-20 07:13:53.046835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 [2024-11-20 07:13:53.106970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.473 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 
07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 [2024-11-20 07:13:53.167138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.474 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.474 [2024-11-20 07:13:53.235341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 [2024-11-20 07:13:53.303561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.735 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:18.736 "tick_rate": 2400000000, 00:13:18.736 "poll_groups": [ 00:13:18.736 { 00:13:18.736 "name": "nvmf_tgt_poll_group_000", 00:13:18.736 "admin_qpairs": 0, 00:13:18.736 "io_qpairs": 224, 00:13:18.736 "current_admin_qpairs": 0, 00:13:18.736 "current_io_qpairs": 0, 00:13:18.736 "pending_bdev_io": 0, 00:13:18.736 "completed_nvme_io": 530, 00:13:18.736 "transports": [ 00:13:18.736 { 00:13:18.736 "trtype": "TCP" 00:13:18.736 } 00:13:18.736 ] 00:13:18.736 }, 00:13:18.736 { 00:13:18.736 "name": "nvmf_tgt_poll_group_001", 00:13:18.736 "admin_qpairs": 1, 00:13:18.736 "io_qpairs": 223, 00:13:18.736 "current_admin_qpairs": 0, 00:13:18.736 "current_io_qpairs": 0, 00:13:18.736 "pending_bdev_io": 0, 00:13:18.736 "completed_nvme_io": 235, 00:13:18.736 "transports": [ 00:13:18.736 { 00:13:18.736 "trtype": "TCP" 00:13:18.736 } 00:13:18.736 ] 00:13:18.736 }, 00:13:18.736 { 00:13:18.736 "name": "nvmf_tgt_poll_group_002", 00:13:18.736 "admin_qpairs": 6, 00:13:18.736 "io_qpairs": 218, 00:13:18.736 "current_admin_qpairs": 0, 00:13:18.736 "current_io_qpairs": 0, 00:13:18.736 "pending_bdev_io": 0, 00:13:18.736 "completed_nvme_io": 220, 00:13:18.736 "transports": [ 00:13:18.736 { 00:13:18.736 "trtype": "TCP" 00:13:18.736 } 00:13:18.736 ] 00:13:18.736 }, 00:13:18.736 { 00:13:18.736 "name": "nvmf_tgt_poll_group_003", 00:13:18.736 "admin_qpairs": 0, 00:13:18.736 "io_qpairs": 224, 00:13:18.736 "current_admin_qpairs": 0, 00:13:18.736 "current_io_qpairs": 0, 00:13:18.736 "pending_bdev_io": 0, 00:13:18.736 "completed_nvme_io": 254, 00:13:18.736 "transports": [ 00:13:18.736 { 00:13:18.736 "trtype": "TCP" 00:13:18.736 } 00:13:18.736 ] 00:13:18.736 } 00:13:18.736 ] 00:13:18.736 }' 00:13:18.736 07:13:53 
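The nvmf_get_stats JSON captured above is what the jsum helper (rpc.sh@19-20) aggregates in the calls that follow: a jq filter extracts one counter per poll group and awk sums the column, so the assertions only require the totals to be positive. A sketch of the same aggregation, assuming the stats JSON arrives on stdin:

# jsum as traced at rpc.sh@19-20: sum one numeric field across poll groups.
jsum() {
    local filter=$1
    jq "$filter" | awk '{s+=$1} END {print s}'
}

scripts/rpc.py nvmf_get_stats | jsum '.poll_groups[].admin_qpairs'  # 0+1+6+0 = 7 above
scripts/rpc.py nvmf_get_stats | jsum '.poll_groups[].io_qpairs'     # 224+223+218+224 = 889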
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.736 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.736 rmmod nvme_tcp 00:13:18.736 rmmod nvme_fabrics 00:13:18.997 rmmod nvme_keyring 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1192607 ']' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1192607 ']' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1192607' 00:13:18.997 killing process with pid 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1192607 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.997 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.541 00:13:21.541 real 0m38.919s 00:13:21.541 user 1m54.170s 00:13:21.541 sys 0m8.490s 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.541 ************************************ 00:13:21.541 END TEST nvmf_rpc 00:13:21.541 ************************************ 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.541 ************************************ 00:13:21.541 START TEST nvmf_invalid 00:13:21.541 ************************************ 00:13:21.541 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.541 * Looking for test storage... 
00:13:21.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.541 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.542 --rc genhtml_branch_coverage=1 00:13:21.542 --rc genhtml_function_coverage=1 00:13:21.542 --rc genhtml_legend=1 00:13:21.542 --rc geninfo_all_blocks=1 00:13:21.542 --rc geninfo_unexecuted_blocks=1 00:13:21.542 00:13:21.542 ' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.542 --rc genhtml_branch_coverage=1 00:13:21.542 --rc genhtml_function_coverage=1 00:13:21.542 --rc genhtml_legend=1 00:13:21.542 --rc geninfo_all_blocks=1 00:13:21.542 --rc geninfo_unexecuted_blocks=1 00:13:21.542 00:13:21.542 ' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.542 --rc genhtml_branch_coverage=1 00:13:21.542 --rc genhtml_function_coverage=1 00:13:21.542 --rc genhtml_legend=1 00:13:21.542 --rc geninfo_all_blocks=1 00:13:21.542 --rc geninfo_unexecuted_blocks=1 00:13:21.542 00:13:21.542 ' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.542 --rc genhtml_branch_coverage=1 00:13:21.542 --rc genhtml_function_coverage=1 00:13:21.542 --rc genhtml_legend=1 00:13:21.542 --rc geninfo_all_blocks=1 00:13:21.542 --rc geninfo_unexecuted_blocks=1 00:13:21.542 00:13:21.542 ' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:21.542 07:13:56 
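The cmp_versions trace above (scripts/common.sh@333-368) is a plain Bash version comparison: split both strings on '.', '-', and ':', then walk the components numerically until one side wins; here 1.15 < 2 on the first component, so the lcov branch/function-coverage options get enabled. A trimmed sketch of that comparison (renamed version_lt here), assuming purely numeric components:

# Trimmed from the scripts/common.sh trace: returns 0 if $1 < $2.
version_lt() {
    local IFS=.-: v=0
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    while (( v < len )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
        (( v++ ))
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "lcov older than 2"   # true: 1 < 2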
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
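
`nvme gen-hostnqn`, invoked at nvmf/common.sh@17 above, typically derives the host NQN from the machine's DMI/system UUID (falling back to a random one), which is why the same UUID can double as NVME_HOSTID. A minimal sketch of splitting the two — the parameter expansion here is illustrative, not necessarily the script's own code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'
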
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
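
The `[: : integer expression expected` message captured above is a real (harmless) shell diagnostic: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` with an empty operand, and `test`'s `-eq` requires integers on both sides. A standalone reproduction plus two common guards — illustrative only, not a claim about how common.sh should be fixed:

    unset flag
    [ "$flag" -eq 1 ] && echo yes        # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] && echo yes   # guard 1: default the empty value to 0
    [[ $flag == 1 ]] && echo yes         # guard 2: compare as a string instead

Because the failed test simply evaluates false, the script continues, which is why the run proceeds to `have_pci_nics=0` immediately afterwards.
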
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.542 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.543 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.543 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.543 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.543 07:13:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.688 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:29.689 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:29.689 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:29.689 Found net devices under 0000:31:00.0: cvl_0_0 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:29.689 Found net devices under 0000:31:00.1: cvl_0_1 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.689 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
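
Both ports matched the e810 whitelist (vendor 0x8086, device 0x159b), and the walk above then finds the kernel netdev behind each whitelisted PCI function by globbing sysfs. A hypothetical standalone version of that loop, using the two e810 functions this host reported:

    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue            # no driver bound -> no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # basename: cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
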
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.690 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.716 ms 00:13:29.953 00:13:29.953 --- 10.0.0.2 ping statistics --- 00:13:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.953 rtt min/avg/max/mdev = 0.716/0.716/0.716/0.000 ms 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:13:29.953 00:13:29.953 --- 10.0.0.1 ping statistics --- 00:13:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.953 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1202820 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1202820 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1202820 ']' 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.953 07:14:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.953 [2024-11-20 07:14:04.624113] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
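
Because NET_TYPE=phy, nvmf_tcp_init splits the two back-to-back e810 ports across network namespaces: the target port moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, and the target app is then launched under `ip netns exec`. The same topology as a root-shell sketch (commands as traced above; the ACCEPT rule opens the NVMe/TCP listener port 4420):

    ip netns add cvl_0_0_ns_spdk                  # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings are the smoke test that the point-to-point link works in both directions; only then is `nvmf_tgt -i 0 -e 0xFFFF -m 0xF` started inside the namespace, whose core mask matches the four reactors reported below.
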
00:13:29.953 [2024-11-20 07:14:04.624179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.953 [2024-11-20 07:14:04.714929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.215 [2024-11-20 07:14:04.756241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.215 [2024-11-20 07:14:04.756278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.215 [2024-11-20 07:14:04.756287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.215 [2024-11-20 07:14:04.756294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.215 [2024-11-20 07:14:04.756300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.215 [2024-11-20 07:14:04.757904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.215 [2024-11-20 07:14:04.758106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.215 [2024-11-20 07:14:04.758261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.215 [2024-11-20 07:14:04.758261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:30.786 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30913 00:13:31.046 [2024-11-20 07:14:05.618289] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:31.046 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:31.046 { 00:13:31.046 "nqn": "nqn.2016-06.io.spdk:cnode30913", 00:13:31.046 "tgt_name": "foobar", 00:13:31.046 "method": "nvmf_create_subsystem", 00:13:31.046 "req_id": 1 00:13:31.046 } 00:13:31.046 Got JSON-RPC error response 00:13:31.046 response: 00:13:31.046 { 00:13:31.046 "code": -32603, 00:13:31.046 "message": "Unable to find target foobar" 00:13:31.046 }' 00:13:31.046 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:31.046 { 00:13:31.046 "nqn": "nqn.2016-06.io.spdk:cnode30913", 00:13:31.046 "tgt_name": "foobar", 00:13:31.047 "method": "nvmf_create_subsystem", 00:13:31.047 "req_id": 1 00:13:31.047 } 00:13:31.047 Got JSON-RPC error response 00:13:31.047 
response: 00:13:31.047 { 00:13:31.047 "code": -32603, 00:13:31.047 "message": "Unable to find target foobar" 00:13:31.047 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:31.047 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:31.047 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5062 00:13:31.047 [2024-11-20 07:14:05.810955] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5062: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:31.308 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:31.308 { 00:13:31.308 "nqn": "nqn.2016-06.io.spdk:cnode5062", 00:13:31.308 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.308 "method": "nvmf_create_subsystem", 00:13:31.308 "req_id": 1 00:13:31.308 } 00:13:31.308 Got JSON-RPC error response 00:13:31.308 response: 00:13:31.308 { 00:13:31.308 "code": -32602, 00:13:31.308 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.308 }' 00:13:31.308 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:31.308 { 00:13:31.308 "nqn": "nqn.2016-06.io.spdk:cnode5062", 00:13:31.308 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.308 "method": "nvmf_create_subsystem", 00:13:31.308 "req_id": 1 00:13:31.308 } 00:13:31.308 Got JSON-RPC error response 00:13:31.308 response: 00:13:31.308 { 00:13:31.308 "code": -32602, 00:13:31.308 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.308 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:31.308 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:31.308 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30750 00:13:31.308 [2024-11-20 07:14:06.003530] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30750: invalid model number 'SPDK_Controller' 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:31.308 { 00:13:31.308 "nqn": "nqn.2016-06.io.spdk:cnode30750", 00:13:31.308 "model_number": "SPDK_Controller\u001f", 00:13:31.308 "method": "nvmf_create_subsystem", 00:13:31.308 "req_id": 1 00:13:31.308 } 00:13:31.308 Got JSON-RPC error response 00:13:31.308 response: 00:13:31.308 { 00:13:31.308 "code": -32602, 00:13:31.308 "message": "Invalid MN SPDK_Controller\u001f" 00:13:31.308 }' 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:31.308 { 00:13:31.308 "nqn": "nqn.2016-06.io.spdk:cnode30750", 00:13:31.308 "model_number": "SPDK_Controller\u001f", 00:13:31.308 "method": "nvmf_create_subsystem", 00:13:31.308 "req_id": 1 00:13:31.308 } 00:13:31.308 Got JSON-RPC error response 00:13:31.308 response: 00:13:31.308 { 00:13:31.308 "code": -32602, 00:13:31.308 "message": "Invalid MN SPDK_Controller\u001f" 00:13:31.308 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:31.308 07:14:06 
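
The three rejections above — unknown target name, serial number carrying a 0x1f control byte, model number carrying the same byte — all follow one pattern: capture the JSON-RPC error from rpc.py and assert on its message. In miniature, under the assumption that rpc.py prints the error to stderr and exits non-zero (paths and cnode numbers taken from this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30913 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo 'PASS: unknown target rejected'

    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
            nqn.2016-06.io.spdk:cnode5062 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo 'PASS: control byte in serial rejected'
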
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:31.308 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:31.569 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:31.569 
07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=%zCs-;2J~ALuDsY.9- d' 00:13:31.570 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=%zCs-;2J~ALuDsY.9- d' nqn.2016-06.io.spdk:cnode1623 00:13:31.831 [2024-11-20 07:14:06.360660] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1623: invalid serial number '=%zCs-;2J~ALuDsY.9- d' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:31.831 { 00:13:31.831 "nqn": "nqn.2016-06.io.spdk:cnode1623", 00:13:31.831 "serial_number": "=%zCs-;2J~ALuDsY.9- d", 00:13:31.831 "method": "nvmf_create_subsystem", 00:13:31.831 "req_id": 1 00:13:31.831 } 00:13:31.831 Got JSON-RPC error response 00:13:31.831 response: 00:13:31.831 { 00:13:31.831 "code": -32602, 00:13:31.831 "message": "Invalid SN =%zCs-;2J~ALuDsY.9- d" 00:13:31.831 }' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 
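
`gen_random_s 21` above built the 21-character serial `=%zCs-;2J~ALuDsY.9- d` one character at a time from ASCII codes 32–127 (RANDOM was seeded to 0 at target/invalid.sh@16, so the sequence repeats across runs of the same bash build). A compact, hypothetical restatement of the loop the trace unrolls — not the invalid.sh source itself:

    gen_random_s() {
        local length=$1 ll hex string=
        local -a chars=({32..127})        # the code list the log spells out
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "\x$hex")   # hex code -> literal character
        done
        echo "$string"
    }
    RANDOM=0          # same seed as target/invalid.sh@16
    gen_random_s 21

The `[[ = == \- ]]` check at the end compares the string's first character against '-' (here it is '='), presumably so a leading dash cannot be mistaken for an option when the string is later passed to rpc.py. The same generator is then run with length 41 below to produce an over-long serial number.
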
00:13:31.831 { 00:13:31.831 "nqn": "nqn.2016-06.io.spdk:cnode1623", 00:13:31.831 "serial_number": "=%zCs-;2J~ALuDsY.9- d", 00:13:31.831 "method": "nvmf_create_subsystem", 00:13:31.831 "req_id": 1 00:13:31.831 } 00:13:31.831 Got JSON-RPC error response 00:13:31.831 response: 00:13:31.831 { 00:13:31.831 "code": -32602, 00:13:31.831 "message": "Invalid SN =%zCs-;2J~ALuDsY.9- d" 00:13:31.831 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.831 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 
00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x22' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 48 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.832 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:31.833 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x43' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 98 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:13:32.094 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '`kzJz$$)4y"f:>0T+0,i8!yyuD{C{y<JQb%0q7' 00:13:34.183 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.183 07:14:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
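The trace above is target/invalid.sh assembling its random test name one byte at a time: line 24 drives the ll < length loop, line 25 turns each byte value into a character with printf %x plus echo -e, line 28 rejects a leading '-', and line 31 echoes the finished string (here `kzJz$$)4y"f:>0T+0,i8!yyuD{C{y<JQb%0q7, with the \x7f DEL bytes invisible) so it can be fed to the target as a deliberately invalid name. A minimal sketch of the same pattern; gen_random_string and the length of 39 are assumptions for illustration, not the script's exact helper or value:

    #!/usr/bin/env bash
    # Sketch only: mirrors the printf %x / echo -e loop traced above.
    gen_random_string() {
        local length=$1 string='' ll byte ch
        for (( ll = 0; ll < length; ll++ )); do
            byte=$(( 32 + RANDOM % 96 ))               # 0x20..0x7f; DEL can occur, as the trace shows
            printf -v ch "\\x$(printf '%x' "$byte")"   # same job as printf %x + echo -e in the log
            string+=$ch
        done
        [[ $string == \-* ]] && string=${string#-}     # one way to honor line 28's leading '-' guard
        printf '%s\n' "$string"
    }
    gen_random_string 39                               # arbitrary length for the sketch

00:13:36.728 07:14:10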
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.728 00:13:36.728 real 0m14.985s 00:13:36.728 user 0m21.013s 00:13:36.729 sys 0m7.310s 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:36.729 ************************************ 00:13:36.729 END TEST nvmf_invalid 00:13:36.729 ************************************ 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.729 ************************************ 00:13:36.729 START TEST nvmf_connect_stress 00:13:36.729 ************************************ 00:13:36.729 07:14:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:36.729 * Looking for test storage... 00:13:36.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:36.729 07:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:36.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.729 --rc genhtml_branch_coverage=1 00:13:36.729 --rc genhtml_function_coverage=1 00:13:36.729 --rc genhtml_legend=1 00:13:36.729 --rc geninfo_all_blocks=1 00:13:36.729 --rc geninfo_unexecuted_blocks=1 00:13:36.729 00:13:36.729 ' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:36.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.729 --rc genhtml_branch_coverage=1 00:13:36.729 --rc genhtml_function_coverage=1 00:13:36.729 --rc genhtml_legend=1 00:13:36.729 --rc geninfo_all_blocks=1 00:13:36.729 --rc geninfo_unexecuted_blocks=1 00:13:36.729 00:13:36.729 ' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:36.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.729 --rc genhtml_branch_coverage=1 00:13:36.729 --rc genhtml_function_coverage=1 00:13:36.729 --rc genhtml_legend=1 00:13:36.729 --rc geninfo_all_blocks=1 00:13:36.729 --rc geninfo_unexecuted_blocks=1 00:13:36.729 00:13:36.729 ' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:36.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.729 --rc genhtml_branch_coverage=1 00:13:36.729 --rc genhtml_function_coverage=1 00:13:36.729 --rc genhtml_legend=1 00:13:36.729 --rc geninfo_all_blocks=1 00:13:36.729 --rc 
geninfo_unexecuted_blocks=1 00:13:36.729 00:13:36.729 ' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.729 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:36.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.730 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:44.876 07:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:44.876 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:44.876 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:44.876 Found net devices under 0000:31:00.0: cvl_0_0 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:44.876 Found net devices under 0000:31:00.1: cvl_0_1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:44.876 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:13:45.138 00:13:45.138 --- 10.0.0.2 ping statistics --- 00:13:45.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.138 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:13:45.138 00:13:45.138 --- 10.0.0.1 ping statistics --- 00:13:45.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.138 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1208517 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1208517 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1208517 ']' 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:45.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.138 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.138 [2024-11-20 07:14:19.786701] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:13:45.138 [2024-11-20 07:14:19.786773] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.138 [2024-11-20 07:14:19.895391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.399 [2024-11-20 07:14:19.947152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.399 [2024-11-20 07:14:19.947205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.399 [2024-11-20 07:14:19.947214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.399 [2024-11-20 07:14:19.947221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.399 [2024-11-20 07:14:19.947228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.399 [2024-11-20 07:14:19.949323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.399 [2024-11-20 07:14:19.949489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.399 [2024-11-20 07:14:19.949490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.971 [2024-11-20 07:14:20.651482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
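Everything from gather_supported_nvmf_pci_devs through the two pings above is nvmftestinit/nvmf_tcp_init wiring the two e810 ports back-to-back: cvl_0_0 becomes the target side inside namespace cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, the firewall is opened for TCP port 4420, and both directions are ping-verified before nvmf_tgt starts inside the namespace. Condensed into plain commands (a sketch taken from the trace above; the script's ipts wrapper is approximated with bare iptables, address flushes and error handling are omitted, root required):

    # Sketch of the namespace plumbing traced above, assuming the two
    # port netdevs are named cvl_0_0 (target) and cvl_0_1 (initiator).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # ipts also tags a SPDK_NVMF comment
    ping -c 1 10.0.0.2                                   # root namespace -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator port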
00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.971 [2024-11-20 07:14:20.675810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.971 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.972 NULL1 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1208720 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.972 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.233 07:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.233 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.493 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.494 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:46.494 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.494 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.494 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.753 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.753 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:46.753 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.753 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.753 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.015 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.015 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:47.015 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.015 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.015 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.585 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.585 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:47.585 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.585 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.585 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.879 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.879 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:47.879 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.879 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.879 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.140 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.140 07:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:48.140 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.140 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.140 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.400 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.400 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:48.400 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.400 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.400 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.659 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.659 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:48.659 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.659 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.659 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.230 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.230 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:49.230 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.230 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.230 07:14:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.490 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.490 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:49.490 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.490 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.490 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.750 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:49.750 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.750 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.750 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.011 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.011 07:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:50.011 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.011 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.011 07:14:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.271 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.271 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:50.271 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.271 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.271 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.843 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.843 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:50.843 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.843 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.843 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.192 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.192 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:51.192 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.192 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.192 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.475 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.475 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:51.475 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.475 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.475 07:14:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.792 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.792 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:51.792 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.792 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.792 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.110 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.110 07:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:52.110 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.110 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.110 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.372 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.372 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:52.372 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.372 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.372 07:14:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.633 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.633 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:52.633 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.633 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.633 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.895 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.895 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:52.895 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.895 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.895 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.475 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.475 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:53.475 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.475 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.475 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.740 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.740 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:53.740 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.740 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.740 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.002 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.002 07:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:54.002 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.002 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.002 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.264 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.264 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:54.264 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.264 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.264 07:14:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.526 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.526 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:54.526 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.526 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.526 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.176 07:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.748 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.748 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:55.748 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.748 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.748 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.009 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.009 07:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:56.009 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.009 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.009 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.270 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1208720 00:13:56.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1208720) - No such process 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1208720 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.270 rmmod nvme_tcp 00:13:56.270 rmmod nvme_fabrics 00:13:56.270 rmmod nvme_keyring 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1208517 ']' 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1208517 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1208517 ']' 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1208517 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:56.270 07:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1208517 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1208517' 00:13:56.531 killing process with pid 1208517 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1208517 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1208517 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.531 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.532 07:14:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.082 00:13:59.082 real 0m22.259s 00:13:59.082 user 0m42.512s 00:13:59.082 sys 0m9.991s 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.082 ************************************ 00:13:59.082 END TEST nvmf_connect_stress 00:13:59.082 ************************************ 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.082 ************************************ 00:13:59.082 START TEST nvmf_fused_ordering 00:13:59.082 ************************************ 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.082 * Looking for test storage... 
00:13:59.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.082 --rc genhtml_branch_coverage=1 00:13:59.082 --rc genhtml_function_coverage=1 00:13:59.082 --rc genhtml_legend=1 00:13:59.082 --rc geninfo_all_blocks=1 00:13:59.082 --rc geninfo_unexecuted_blocks=1 00:13:59.082 00:13:59.082 ' 00:13:59.082 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.082 --rc genhtml_branch_coverage=1 00:13:59.082 --rc genhtml_function_coverage=1 00:13:59.082 --rc genhtml_legend=1 00:13:59.082 --rc geninfo_all_blocks=1 00:13:59.082 --rc geninfo_unexecuted_blocks=1 00:13:59.083 00:13:59.083 ' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.083 --rc genhtml_branch_coverage=1 00:13:59.083 --rc genhtml_function_coverage=1 00:13:59.083 --rc genhtml_legend=1 00:13:59.083 --rc geninfo_all_blocks=1 00:13:59.083 --rc geninfo_unexecuted_blocks=1 00:13:59.083 00:13:59.083 ' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:59.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.083 --rc genhtml_branch_coverage=1 00:13:59.083 --rc genhtml_function_coverage=1 00:13:59.083 --rc genhtml_legend=1 00:13:59.083 --rc geninfo_all_blocks=1 00:13:59.083 --rc geninfo_unexecuted_blocks=1 00:13:59.083 00:13:59.083 ' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:59.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.083 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.234 07:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.234 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:07.235 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:07.235 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:07.235 Found net devices under 0000:31:00.0: cvl_0_0 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:07.235 Found net devices under 0000:31:00.1: cvl_0_1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.235 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:14:07.498 00:14:07.498 --- 10.0.0.2 ping statistics --- 00:14:07.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.498 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:14:07.498 00:14:07.498 --- 10.0.0.1 ping statistics --- 00:14:07.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.498 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1215466 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1215466 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1215466 ']' 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:07.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.498 07:14:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.498 [2024-11-20 07:14:42.202621] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:14:07.498 [2024-11-20 07:14:42.202691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.760 [2024-11-20 07:14:42.312876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.760 [2024-11-20 07:14:42.362328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.760 [2024-11-20 07:14:42.362382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.760 [2024-11-20 07:14:42.362391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.760 [2024-11-20 07:14:42.362398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.760 [2024-11-20 07:14:42.362404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.760 [2024-11-20 07:14:42.363217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 [2024-11-20 07:14:43.069433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 [2024-11-20 07:14:43.085666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.333 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.333 NULL1 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.594 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:08.594 [2024-11-20 07:14:43.143517] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:14:08.594 [2024-11-20 07:14:43.143562] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215809 ]
00:14:09.165 Attached to nqn.2016-06.io.spdk:cnode1
00:14:09.165 Namespace ID: 1 size: 1GB
00:14:09.165 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022): 1022 further in-order iterations logged between 00:14:09.165 and 00:14:10.835; the repetitive per-iteration counter lines are elided]
00:14:10.835 fused_ordering(1023)
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
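The trace just above is the standard nvmftestfini/nvmfcleanup teardown. Reduced to plain shell it is roughly the following sketch; the retry delay is an assumption, the trace only shows the loop bound:

    # sketch of the traced module cleanup, assuming nvme-tcp/nvme-fabrics were loaded by the test
    sync                                   # flush page cache before unloading kernel modules
    set +e                                 # removal can fail transiently while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                            # back-off between retries (assumption; not shown in the trace)
    done
    set -e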
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1215466 ']'
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1215466
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1215466 ']'
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1215466
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1215466
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1215466'
killing process with pid 1215466
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1215466
00:14:10.835 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1215466
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:11.097 07:14:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:13.012 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:13.274
00:14:13.274 real 0m14.476s
00:14:13.274 user 0m7.422s
00:14:13.274 sys 0m7.842s
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:13.274 ************************************
00:14:13.274 END TEST nvmf_fused_ordering
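The killprocess/iptr trace above is the generic per-test process and firewall cleanup. A sketch of the traced logic in plain shell (the pid is this run's nvmf target; wait only reaps children, which holds inside the harness):

    pid=1215466
    if kill -0 "$pid" 2>/dev/null; then        # probe: is the target process still alive?
        kill "$pid"
        wait "$pid"                            # reap it so nothing leaks into the next test
    fi
    # drop every iptables rule the suite tagged with an SPDK_NVMF comment, keep the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore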
00:14:13.274 ************************************
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:13.274 ************************************
00:14:13.274 START TEST nvmf_ns_masking
00:14:13.274 ************************************
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:13.274 * Looking for test storage...
00:14:13.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
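run_test above hands control to the ns_masking suite. To reproduce this stage outside CI, the equivalent invocation would be roughly the following sketch (root privileges are an assumption; the script loads kernel modules and edits iptables):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/target/ns_masking.sh --transport=tcp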
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:14:13.274 07:14:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:14:13.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:13.536 --rc genhtml_branch_coverage=1
00:14:13.536 --rc genhtml_function_coverage=1
00:14:13.536 --rc genhtml_legend=1
00:14:13.536 --rc geninfo_all_blocks=1
00:14:13.536 --rc geninfo_unexecuted_blocks=1
00:14:13.536
00:14:13.536 '
[the traced LCOV_OPTS=... assignment (common/autotest_common.sh@1704) and the export/assignment pair for LCOV='lcov ...' (common/autotest_common.sh@1705) echo the same option block verbatim; those three repeats are elided]
00:14:13.536 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
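The lt/cmp_versions xtrace above (used here to check whether lcov is older than version 2) is a component-wise numeric comparison: both versions are split on ".", "-" and ":", and the first differing component decides. A standalone sketch reconstructed from the trace; the zero-padding of missing components is an assumption, and the real scripts/common.sh also sanitizes each component through its decimal helper:

    cmp_versions() {
        local IFS=.-:                      # split version strings on . - : as traced
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with zeros (assumption)
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                  # every component equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds: 1 < 2 in the first component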
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6: PATH is repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the traced values, which repeat the same three toolchain directories many times before the system directories, are elided]
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
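The "[: : integer expression expected" message above is a real, if benign, shell error: at line 33 of test/nvmf/common.sh an empty (unset) variable reaches a numeric -eq test, the [ builtin rejects the empty string, and the branch simply falls through. A defensive sketch of the guard; SOME_FLAG is hypothetical, the variable's name is not visible in the trace:

    # default an empty/unset flag to 0 before a numeric test
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"                    # placeholder branch; the script's real action is not shown
    fi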
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=454bb81e-c06f-4593-a633-51bd7e1621f8
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e0047144-3226-4f62-ad5e-929d0db294a2
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=97a56b95-64ba-4e4f-a7cc-5ddeafc95c87
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:14:13.537 07:14:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
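Collected from the xtrace above, the ns_masking run is parameterized as follows; a sketch of the same setup as a plain script (the UUIDs are whatever uuidgen returned in this particular run):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    loops=5
    ns1uuid=$(uuidgen)                     # 454bb81e-c06f-4593-a633-51bd7e1621f8 in this run
    ns2uuid=$(uuidgen)                     # e0047144-3226-4f62-ad5e-929d0db294a2 in this run
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                      # 97a56b95-64ba-4e4f-a7cc-5ddeafc95c87 in this run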
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:14:21.688 Found 0000:31:00.0 (0x8086 - 0x159b)
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:14:21.688 Found 0000:31:00.1 (0x8086 - 0x159b)
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:21.688 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:14:21.689 Found net devices under 0000:31:00.0: cvl_0_0
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
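gather_supported_nvmf_pci_devs above first buckets NICs by PCI vendor:device ID (e810/x722 for Intel, mlx for Mellanox) and then resolves each selected PCI address to its kernel interface through sysfs; the same loop repeats for 0000:31:00.1 just below. A standalone sketch of the discovery step, assuming a pre-built pci_bus_cache map keyed "vendor:device" -> PCI addresses as the trace implies; the operstate read is an assumption, since the trace only shows the resulting '[[ up == up ]]' test:

    # resolve each selected PCI NIC to its net interface, keeping only links that are up
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # e.g. .../net/cvl_0_0
        for net_dev in "${pci_net_devs[@]}"; do
            [[ $(< "$net_dev/operstate") == up ]] || continue  # skip interfaces that are down
            net_devs+=("${net_dev##*/}")                       # strip the sysfs path, keep the ifname
        done
    done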
00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:21.689 Found net devices under 0000:31:00.1: cvl_0_1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:21.689 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.689 07:14:56 
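(The block above is nvmf_tcp_init splitting the two E810 ports between a network namespace and the root namespace, so target and initiator on one host still cross the physical wire instead of loopback. Reduced to its essentials, with the interface, namespace, and address names exactly as used in this run:)

# Target port cvl_0_0 moves into its own netns; initiator port cvl_0_1 stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up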
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:21.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:14:21.689 00:14:21.689 --- 10.0.0.2 ping statistics --- 00:14:21.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.689 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:21.689 00:14:21.689 --- 10.0.0.1 ping statistics --- 00:14:21.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.689 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1220840 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1220840 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1220840 ']' 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.689 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:21.689 [2024-11-20 07:14:56.214785] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:14:21.690 [2024-11-20 07:14:56.214853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.690 [2024-11-20 07:14:56.304989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.690 [2024-11-20 07:14:56.344920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.690 [2024-11-20 07:14:56.344956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.690 [2024-11-20 07:14:56.344966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.690 [2024-11-20 07:14:56.344975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.690 [2024-11-20 07:14:56.344982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
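(nvmfappstart, traced above, launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that shape, assuming the default /var/tmp/spdk.sock socket; the rpc_get_methods polling loop is a crude stand-in for waitforlisten's real retry logic:)

# Start the target in the netns, then wait until its RPC socket is live.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5   # simplified stand-in for waitforlisten
done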
00:14:21.690 [2024-11-20 07:14:56.345571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.263 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.263 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:22.263 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.263 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.263 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.524 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.524 [2024-11-20 07:14:57.180208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.524 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:22.524 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:22.524 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:22.784 Malloc1 00:14:22.785 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:22.785 Malloc2 00:14:22.785 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.045 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:23.305 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.305 [2024-11-20 07:14:57.967649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.305 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:23.305 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 97a56b95-64ba-4e4f-a7cc-5ddeafc95c87 -a 10.0.0.2 -s 4420 -i 4 00:14:23.566 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.566 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:23.567 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.567 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:23.567 
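(The provisioning sequence the trace just walked through, collected in order. Every name — Malloc1, cnode1, the serial, the host NQN and UUID — is the value used in this run; rpc.py abbreviates the full scripts/rpc.py path, and -a on nvmf_create_subsystem allows any host to connect at this stage:)

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 97a56b95-64ba-4e4f-a7cc-5ddeafc95c87 -a 10.0.0.2 -s 4420 -i 4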
07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:25.488 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:25.488 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.489 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.750 [ 0]:0x1 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c0661c10f57463db238d487d5f99c56 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c0661c10f57463db238d487d5f99c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.750 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.012 [ 0]:0x1 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c0661c10f57463db238d487d5f99c56 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c0661c10f57463db238d487d5f99c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.012 07:15:00 
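(The ns_is_visible checks above boil down to two probes on the connected controller: does the NSID appear in list-ns, and does id-ns report a real NGUID rather than all zeros? A condensed reading of the target/ns_masking.sh helper, not its verbatim source:)

ns_is_visible() {
  local nsid=$1                      # e.g. 0x1, matching the "[ 0]:0x1" lines above
  nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
  local nguid
  nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]]
}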
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.012 [ 1]:0x2 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.012 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.273 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:26.534 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:26.534 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 97a56b95-64ba-4e4f-a7cc-5ddeafc95c87 -a 10.0.0.2 -s 4420 -i 4 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:26.795 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.709 [ 0]:0x2 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.709 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=766b00ebbb5043fe8957eff542c47e12 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.970 [ 0]:0x1 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.970 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c0661c10f57463db238d487d5f99c56 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c0661c10f57463db238d487d5f99c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.231 [ 1]:0x2 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.231 07:15:03 
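(These are the masking toggles the test exercises: a namespace added with --no-auto-visible stays hidden until a host NQN is explicitly allowed in, and one call hides it again. Commands exactly as issued above; note the visibility checks on either side of each call run on the already-connected controller, so no reconnect is needed for the change to take effect:)

rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1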
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.231 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.492 [ 0]:0x2 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.492 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 97a56b95-64ba-4e4f-a7cc-5ddeafc95c87 -a 10.0.0.2 -s 4420 -i 4 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:29.753 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.298 [ 0]:0x1 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c0661c10f57463db238d487d5f99c56 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c0661c10f57463db238d487d5f99c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.298 [ 1]:0x2 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.298 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.298 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.560 [ 0]:0x2 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.560 07:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.560 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:32.560 [2024-11-20 07:15:07.310819] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:32.560 request: 00:14:32.560 { 00:14:32.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.560 "nsid": 2, 00:14:32.560 "host": "nqn.2016-06.io.spdk:host1", 00:14:32.560 "method": "nvmf_ns_remove_host", 00:14:32.560 "req_id": 1 00:14:32.560 } 00:14:32.560 Got JSON-RPC error response 00:14:32.560 response: 00:14:32.560 { 00:14:32.560 "code": -32602, 00:14:32.560 "message": "Invalid parameters" 00:14:32.560 } 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.822 07:15:07 
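(The NOT wrapper seen in the trace asserts that a command fails; here the nvmf_ns_remove_host call on namespace 2 is expected to return the -32602 "Invalid parameters" JSON-RPC error, presumably because namespace 2 was added auto-visible and so has no per-host visibility list to edit. A simplified sketch of that negative-test pattern — the real autotest NOT also distinguishes crash-level exit codes:)

NOT() { ! "$@"; }   # simplified: succeed only if the wrapped command fails
NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
  && echo "got the expected Invalid parameters error"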
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.822 [ 0]:0x2 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=766b00ebbb5043fe8957eff542c47e12 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 766b00ebbb5043fe8957eff542c47e12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1223458 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1223458 /var/tmp/host.sock 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1223458 ']' 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:32.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:32.822 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:32.822 [2024-11-20 07:15:07.573578] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:14:32.822 [2024-11-20 07:15:07.573630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223458 ] 00:14:33.083 [2024-11-20 07:15:07.666587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.083 [2024-11-20 07:15:07.702525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.725 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:33.725 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:33.725 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.986 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.986 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 454bb81e-c06f-4593-a633-51bd7e1621f8 00:14:33.986 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.986 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 454BB81EC06F4593A63351BD7E1621F8 -i 00:14:34.247 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e0047144-3226-4f62-ad5e-929d0db294a2 00:14:34.247 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:34.247 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E004714432264F62AD5E929D0DB294A2 -i 00:14:34.507 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.507 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:34.768 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:34.768 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:35.030 nvme0n1 00:14:35.030 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:35.030 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:35.291 nvme1n2 00:14:35.291 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:35.291 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:35.291 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:35.291 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:35.291 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 454bb81e-c06f-4593-a633-51bd7e1621f8 == \4\5\4\b\b\8\1\e\-\c\0\6\f\-\4\5\9\3\-\a\6\3\3\-\5\1\b\d\7\e\1\6\2\1\f\8 ]] 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:35.554 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:35.816 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
e0047144-3226-4f62-ad5e-929d0db294a2 == \e\0\0\4\7\1\4\4\-\3\2\2\6\-\4\f\6\2\-\a\d\5\e\-\9\2\9\d\0\d\b\2\9\4\a\2 ]] 00:14:35.816 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 454bb81e-c06f-4593-a633-51bd7e1621f8 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 454BB81EC06F4593A63351BD7E1621F8 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 454BB81EC06F4593A63351BD7E1621F8 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:36.077 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 454BB81EC06F4593A63351BD7E1621F8 00:14:36.338 [2024-11-20 07:15:10.924860] bdev.c:8477:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:36.338 [2024-11-20 07:15:10.924897] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:36.338 [2024-11-20 07:15:10.924906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.338 request: 00:14:36.338 { 00:14:36.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.338 "namespace": { 00:14:36.338 "bdev_name": 
"invalid", 00:14:36.338 "nsid": 1, 00:14:36.338 "nguid": "454BB81EC06F4593A63351BD7E1621F8", 00:14:36.338 "no_auto_visible": false 00:14:36.338 }, 00:14:36.338 "method": "nvmf_subsystem_add_ns", 00:14:36.338 "req_id": 1 00:14:36.338 } 00:14:36.338 Got JSON-RPC error response 00:14:36.338 response: 00:14:36.339 { 00:14:36.339 "code": -32602, 00:14:36.339 "message": "Invalid parameters" 00:14:36.339 } 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 454bb81e-c06f-4593-a633-51bd7e1621f8 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:36.339 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 454BB81EC06F4593A63351BD7E1621F8 -i 00:14:36.600 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:38.518 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:38.518 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:38.518 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:38.778 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1223458 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1223458 ']' 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1223458 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1223458 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1223458' 00:14:38.779 killing process with pid 1223458 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1223458 00:14:38.779 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1223458 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.039 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.039 rmmod nvme_tcp 00:14:39.039 rmmod nvme_fabrics 00:14:39.039 rmmod nvme_keyring 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1220840 ']' 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1220840 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1220840 ']' 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1220840 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1220840 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1220840' 00:14:39.300 killing process with pid 1220840 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1220840 00:14:39.300 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1220840 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.300 07:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.847 00:14:41.847 real 0m28.257s 00:14:41.847 user 0m30.851s 00:14:41.847 sys 0m8.746s 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.847 ************************************ 00:14:41.847 END TEST nvmf_ns_masking 00:14:41.847 ************************************ 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.847 ************************************ 00:14:41.847 START TEST nvmf_nvme_cli 00:14:41.847 ************************************ 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:41.847 * Looking for test storage... 
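The nvmf_ns_masking teardown traced above follows a fixed order: unload the host-side kernel modules, kill the target process, strip only the firewall rules the harness tagged, and flush the interfaces it configured. Roughly, and assuming the interface and namespace names from this run (the _remove_spdk_ns helper may do more than the netns delete shown here):

    # Unload host NVMe/TCP modules; -v echoes the rmmod lines seen in the log.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Keep every rule except the ones tagged SPDK_NVMF, then reload the set.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the target-side namespace and clear the initiator-side address.
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1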
00:14:41.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.847 --rc genhtml_branch_coverage=1 00:14:41.847 --rc genhtml_function_coverage=1 00:14:41.847 --rc genhtml_legend=1 00:14:41.847 --rc geninfo_all_blocks=1 00:14:41.847 --rc geninfo_unexecuted_blocks=1 00:14:41.847 00:14:41.847 ' 00:14:41.847 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.847 --rc genhtml_branch_coverage=1 00:14:41.847 --rc genhtml_function_coverage=1 00:14:41.847 --rc genhtml_legend=1 00:14:41.847 --rc geninfo_all_blocks=1 00:14:41.847 --rc geninfo_unexecuted_blocks=1 00:14:41.848 00:14:41.848 ' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.848 --rc genhtml_branch_coverage=1 00:14:41.848 --rc genhtml_function_coverage=1 00:14:41.848 --rc genhtml_legend=1 00:14:41.848 --rc geninfo_all_blocks=1 00:14:41.848 --rc geninfo_unexecuted_blocks=1 00:14:41.848 00:14:41.848 ' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.848 --rc genhtml_branch_coverage=1 00:14:41.848 --rc genhtml_function_coverage=1 00:14:41.848 --rc genhtml_legend=1 00:14:41.848 --rc geninfo_all_blocks=1 00:14:41.848 --rc geninfo_unexecuted_blocks=1 00:14:41.848 00:14:41.848 ' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
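The long run of scripts/common.sh calls above is a component-wise version comparison: lcov --version is split on dots, dashes, and colons, each piece is checked as a decimal, and 1.15 is compared against 2 field by field to pick the right LCOV_OPTS. A condensed sketch of that logic, assuming purely numeric components (the real cmp_versions also validates non-numeric parts):

    # Succeed iff version $1 sorts strictly before version $2, numeric field by field.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, use the branch/function coverage flags above"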
00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.848 07:15:16 
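Note the harness warning captured just above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' when the guarded flag is unset, and test rejects the empty operand ("[: : integer expression expected"). A defensive form of that guard, as a sketch with an illustrative variable name (the actual flag tested at common.sh line 33 is not visible in this trace):

    # Coalesce an unset or empty flag to 0 so `[` never sees an empty integer operand.
    flag=${SPDK_TEST_SOME_FEATURE:-0}
    if [ "$flag" -eq 1 ]; then
        echo "feature enabled"   # stands in for whatever line 33 appends to NVMF_APP
    fi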
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.848 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:49.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:49.997 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.997 
07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:49.997 Found net devices under 0000:31:00.0: cvl_0_0 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:49.997 Found net devices under 0000:31:00.1: cvl_0_1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
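The device scan above walks the supported PCI IDs (E810, X722, and the Mellanox list), then resolves each matching address to its kernel interface through sysfs. For one device from this run, the resolution distills to:

    pci=0000:31:00.0
    # Every entry under the device's net/ directory is an interface bound to that function.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"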
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:14:49.997 00:14:49.997 --- 10.0.0.2 ping statistics --- 00:14:49.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.997 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:14:49.997 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:14:49.997 00:14:49.998 --- 10.0.0.1 ping statistics --- 00:14:49.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.998 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1229876 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1229876 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1229876 ']' 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.998 07:15:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.998 [2024-11-20 07:15:24.758798] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
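Because this is a phy run with both NIC ports cabled together, the harness puts the target port in its own network namespace so NVMe/TCP traffic genuinely leaves the host stack. The plumbing traced above reduces to the following, with the interface names and 10.0.0.x addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so teardown can strip exactly this entry via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target, across the wire
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator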
00:14:49.998 [2024-11-20 07:15:24.758849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.259 [2024-11-20 07:15:24.846530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.259 [2024-11-20 07:15:24.883935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.260 [2024-11-20 07:15:24.883971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.260 [2024-11-20 07:15:24.883979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.260 [2024-11-20 07:15:24.883986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.260 [2024-11-20 07:15:24.883992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.260 [2024-11-20 07:15:24.885757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.260 [2024-11-20 07:15:24.885774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.260 [2024-11-20 07:15:24.885907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.260 [2024-11-20 07:15:24.885907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.833 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.833 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:50.833 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.833 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.833 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 [2024-11-20 07:15:25.612993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 Malloc0 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 Malloc1 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 [2024-11-20 07:15:25.712808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.094 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:51.355 00:14:51.355 Discovery Log Number of Records 2, Generation counter 2 00:14:51.355 =====Discovery Log Entry 0====== 00:14:51.355 trtype: tcp 00:14:51.355 adrfam: ipv4 00:14:51.355 subtype: current discovery subsystem 00:14:51.355 treq: not required 00:14:51.355 portid: 0 00:14:51.355 trsvcid: 4420 00:14:51.355 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:51.355 traddr: 10.0.0.2 00:14:51.355 eflags: explicit discovery connections, duplicate discovery information 00:14:51.355 sectype: none 00:14:51.355 =====Discovery Log Entry 1====== 00:14:51.355 trtype: tcp 00:14:51.355 adrfam: ipv4 00:14:51.355 subtype: nvme subsystem 00:14:51.355 treq: not required 00:14:51.355 portid: 0 00:14:51.355 trsvcid: 4420 00:14:51.355 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:51.355 traddr: 10.0.0.2 00:14:51.355 eflags: none 00:14:51.355 sectype: none 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:51.355 07:15:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:52.743 07:15:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:54.656 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:54.917 07:15:29 
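Condensed, the RPC sequence traced above stands the whole target up and attaches the host in a few commands (NQN, serial, sizes, flags, and address exactly as in this run; rpc.py drives the nvmf_tgt launched inside the namespace, and the optional --hostnqn/--hostid values come from the earlier nvme gen-hostnqn call in common.sh):

    rpc=scripts/rpc.py   # run from the SPDK checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM disk, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The discovery log should now show two records: discovery itself and cnode1.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420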
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:54.917 /dev/nvme0n2 ]] 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:54.917 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:55.179 07:15:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.440 07:15:30 
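Both the connect and disconnect checks near this point poll lsblk for the subsystem serial rather than trusting nvme's exit status, since namespaces surface asynchronously. The polling pattern, with the serial and device count from this run:

    serial=SPDKISFASTANDAWESOME
    expected=2
    # waitforserial: retry until both namespaces appear as block devices.
    for (( i = 0; i <= 15; i++ )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && break
        sleep 2
    done

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect: the serial must vanish from lsblk again.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1
    done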
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.440 rmmod nvme_tcp 00:14:55.440 rmmod nvme_fabrics 00:14:55.440 rmmod nvme_keyring 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1229876 ']' 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1229876 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1229876 ']' 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1229876 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:55.440 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1229876 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1229876' 00:14:55.702 killing process with pid 1229876 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1229876 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1229876 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.702 07:15:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:58.248 00:14:58.248 real 0m16.284s 00:14:58.248 user 0m24.360s 00:14:58.248 sys 0m6.954s 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.248 ************************************ 00:14:58.248 END TEST nvmf_nvme_cli 00:14:58.248 ************************************ 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.248 ************************************ 00:14:58.248 START TEST nvmf_vfio_user 00:14:58.248 ************************************ 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:58.248 * Looking for test storage... 00:14:58.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:58.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.248 --rc genhtml_branch_coverage=1 00:14:58.248 --rc genhtml_function_coverage=1 00:14:58.248 --rc genhtml_legend=1 00:14:58.248 --rc geninfo_all_blocks=1 00:14:58.248 --rc geninfo_unexecuted_blocks=1 00:14:58.248 00:14:58.248 ' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:58.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.248 --rc genhtml_branch_coverage=1 00:14:58.248 --rc genhtml_function_coverage=1 00:14:58.248 --rc genhtml_legend=1 00:14:58.248 --rc geninfo_all_blocks=1 00:14:58.248 --rc geninfo_unexecuted_blocks=1 00:14:58.248 00:14:58.248 ' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:58.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.248 --rc genhtml_branch_coverage=1 00:14:58.248 --rc genhtml_function_coverage=1 00:14:58.248 --rc genhtml_legend=1 00:14:58.248 --rc geninfo_all_blocks=1 00:14:58.248 --rc geninfo_unexecuted_blocks=1 00:14:58.248 00:14:58.248 ' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:58.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.248 --rc genhtml_branch_coverage=1 00:14:58.248 --rc genhtml_function_coverage=1 00:14:58.248 --rc genhtml_legend=1 00:14:58.248 --rc geninfo_all_blocks=1 00:14:58.248 --rc geninfo_unexecuted_blocks=1 00:14:58.248 00:14:58.248 ' 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:58.248 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1231481 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1231481' 00:14:58.249 Process pid: 1231481 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1231481 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1231481 ']' 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:58.249 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:58.249 [2024-11-20 07:15:32.861028] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:14:58.249 [2024-11-20 07:15:32.861104] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.249 [2024-11-20 07:15:32.946180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.249 [2024-11-20 07:15:32.988769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.249 [2024-11-20 07:15:32.988803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:58.249 [2024-11-20 07:15:32.988812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.249 [2024-11-20 07:15:32.988819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.249 [2024-11-20 07:15:32.988825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.249 [2024-11-20 07:15:32.990671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.249 [2024-11-20 07:15:32.990794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.249 [2024-11-20 07:15:32.990925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.249 [2024-11-20 07:15:32.990925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.193 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.193 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:59.193 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:00.136 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:00.397 Malloc1 00:15:00.397 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:00.658 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:00.918 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:00.918 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.918 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:00.918 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.179 Malloc2 00:15:01.179 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
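Condensed, the bring-up that this trace performs for each vfio-user device (and which continues just below for the second device) is a short RPC sequence against the freshly started nvmf_tgt. A minimal sketch, with the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shortened to rpc.py; every command and value is taken verbatim from the trace:

  rpc.py nvmf_create_transport -t VFIOUSER                     # once per target
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1              # socket directory for device 1
  rpc.py bdev_malloc_create 64 512 -b Malloc1                  # 64 MB RAM bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Device 2 repeats the same five per-device steps with Malloc2, cnode2, SPDK2, and /var/run/vfio-user/domain/vfio-user2/2.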
00:15:01.438 07:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:01.439 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:01.700 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:01.700 [2024-11-20 07:15:36.368875] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:15:01.700 [2024-11-20 07:15:36.368913] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232176 ] 00:15:01.700 [2024-11-20 07:15:36.424021] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:01.700 [2024-11-20 07:15:36.426368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.700 [2024-11-20 07:15:36.426391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc1fe389000 00:15:01.700 [2024-11-20 07:15:36.427359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.428361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.429371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.430372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.431382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.432386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.433394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.434398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.700 [2024-11-20 07:15:36.435402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.700 [2024-11-20 07:15:36.435412] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc1fe37e000 00:15:01.700 [2024-11-20 07:15:36.436739] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.700 [2024-11-20 07:15:36.456033] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:01.700 [2024-11-20 07:15:36.456061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:01.700 [2024-11-20 07:15:36.461548] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:01.700 [2024-11-20 07:15:36.461595] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:01.700 [2024-11-20 07:15:36.461677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:01.700 [2024-11-20 07:15:36.461697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:01.700 [2024-11-20 07:15:36.461703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:01.700 [2024-11-20 07:15:36.462558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:01.700 [2024-11-20 07:15:36.462569] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:01.700 [2024-11-20 07:15:36.462576] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:01.700 [2024-11-20 07:15:36.463559] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:01.700 [2024-11-20 07:15:36.463568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:01.700 [2024-11-20 07:15:36.463576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.700 [2024-11-20 07:15:36.464567] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:01.700 [2024-11-20 07:15:36.464576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.963 [2024-11-20 07:15:36.465573] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:01.963 [2024-11-20 07:15:36.465583] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:01.963 [2024-11-20 07:15:36.465588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:01.963 [2024-11-20 07:15:36.465595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.963 [2024-11-20 07:15:36.465704] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:01.963 [2024-11-20 07:15:36.465709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.963 [2024-11-20 07:15:36.465715] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:01.963 [2024-11-20 07:15:36.466579] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:01.963 [2024-11-20 07:15:36.467580] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:01.963 [2024-11-20 07:15:36.468589] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:01.963 [2024-11-20 07:15:36.469586] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.963 [2024-11-20 07:15:36.469640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.963 [2024-11-20 07:15:36.470598] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:01.963 [2024-11-20 07:15:36.470606] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.964 [2024-11-20 07:15:36.470614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:01.964 [2024-11-20 07:15:36.470647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470663] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.964 [2024-11-20 07:15:36.470669] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.964 [2024-11-20 07:15:36.470672] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.964 [2024-11-20 07:15:36.470686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.470723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.470733] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:01.964 [2024-11-20 07:15:36.470739] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:01.964 [2024-11-20 07:15:36.470744] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:01.964 [2024-11-20 07:15:36.470749] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:01.964 [2024-11-20 07:15:36.470756] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:01.964 [2024-11-20 07:15:36.470761] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:01.964 [2024-11-20 07:15:36.470766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.470803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.470814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.964 [2024-11-20 07:15:36.470822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.964 [2024-11-20 07:15:36.470831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.964 [2024-11-20 07:15:36.470839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.964 [2024-11-20 07:15:36.470844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.470875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.470883] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:01.964 
[2024-11-20 07:15:36.470888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.470923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.470985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.470993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471001] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.964 [2024-11-20 07:15:36.471005] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.964 [2024-11-20 07:15:36.471009] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.964 [2024-11-20 07:15:36.471015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471039] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:01.964 [2024-11-20 07:15:36.471048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471063] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.964 [2024-11-20 07:15:36.471067] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.964 [2024-11-20 07:15:36.471070] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.964 [2024-11-20 07:15:36.471077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.964 [2024-11-20 07:15:36.471123] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.964 [2024-11-20 07:15:36.471128] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.964 [2024-11-20 07:15:36.471135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471192] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.964 [2024-11-20 07:15:36.471197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:01.964 [2024-11-20 07:15:36.471202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:01.964 [2024-11-20 07:15:36.471219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.964 [2024-11-20 07:15:36.471290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.964 [2024-11-20 07:15:36.471304] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.964 [2024-11-20 07:15:36.471309] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.964 [2024-11-20 07:15:36.471312] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.964 [2024-11-20 07:15:36.471316] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.964 [2024-11-20 07:15:36.471319] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:01.964 [2024-11-20 07:15:36.471326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.964 [2024-11-20 07:15:36.471335] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.964 [2024-11-20 07:15:36.471340] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.964 [2024-11-20 07:15:36.471343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.964 [2024-11-20 07:15:36.471349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.965 [2024-11-20 07:15:36.471356] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.965 [2024-11-20 07:15:36.471361] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.965 [2024-11-20 07:15:36.471364] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.965 [2024-11-20 07:15:36.471370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.965 [2024-11-20 07:15:36.471378] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.965 [2024-11-20 07:15:36.471382] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.965 [2024-11-20 07:15:36.471385] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:01.965 [2024-11-20 07:15:36.471391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.965 [2024-11-20 07:15:36.471398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.965 [2024-11-20 07:15:36.471410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0
00:15:01.965 [2024-11-20 07:15:36.471421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:15:01.965 [2024-11-20 07:15:36.471428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:15:01.965 =====================================================
00:15:01.965 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:01.965 =====================================================
00:15:01.965 Controller Capabilities/Features
00:15:01.965 ================================
00:15:01.965 Vendor ID: 4e58
00:15:01.965 Subsystem Vendor ID: 4e58
00:15:01.965 Serial Number: SPDK1
00:15:01.965 Model Number: SPDK bdev Controller
00:15:01.965 Firmware Version: 25.01
00:15:01.965 Recommended Arb Burst: 6
00:15:01.965 IEEE OUI Identifier: 8d 6b 50
00:15:01.965 Multi-path I/O
00:15:01.965 May have multiple subsystem ports: Yes
00:15:01.965 May have multiple controllers: Yes
00:15:01.965 Associated with SR-IOV VF: No
00:15:01.965 Max Data Transfer Size: 131072
00:15:01.965 Max Number of Namespaces: 32
00:15:01.965 Max Number of I/O Queues: 127
00:15:01.965 NVMe Specification Version (VS): 1.3
00:15:01.965 NVMe Specification Version (Identify): 1.3
00:15:01.965 Maximum Queue Entries: 256
00:15:01.965 Contiguous Queues Required: Yes
00:15:01.965 Arbitration Mechanisms Supported
00:15:01.965 Weighted Round Robin: Not Supported
00:15:01.965 Vendor Specific: Not Supported
00:15:01.965 Reset Timeout: 15000 ms
00:15:01.965 Doorbell Stride: 4 bytes
00:15:01.965 NVM Subsystem Reset: Not Supported
00:15:01.965 Command Sets Supported
00:15:01.965 NVM Command Set: Supported
00:15:01.965 Boot Partition: Not Supported
00:15:01.965 Memory Page Size Minimum: 4096 bytes
00:15:01.965 Memory Page Size Maximum: 4096 bytes
00:15:01.965 Persistent Memory Region: Not Supported
00:15:01.965 Optional Asynchronous Events Supported
00:15:01.965 Namespace Attribute Notices: Supported
00:15:01.965 Firmware Activation Notices: Not Supported
00:15:01.965 ANA Change Notices: Not Supported
00:15:01.965 PLE Aggregate Log Change Notices: Not Supported
00:15:01.965 LBA Status Info Alert Notices: Not Supported
00:15:01.965 EGE Aggregate Log Change Notices: Not Supported
00:15:01.965 Normal NVM Subsystem Shutdown event: Not Supported
00:15:01.965 Zone Descriptor Change Notices: Not Supported
00:15:01.965 Discovery Log Change Notices: Not Supported
00:15:01.965 Controller Attributes
00:15:01.965 128-bit Host Identifier: Supported
00:15:01.965 Non-Operational Permissive Mode: Not Supported
00:15:01.965 NVM Sets: Not Supported
00:15:01.965 Read Recovery Levels: Not Supported
00:15:01.965 Endurance Groups: Not Supported
00:15:01.965 Predictable Latency Mode: Not Supported
00:15:01.965 Traffic Based Keep ALive: Not Supported
00:15:01.965 Namespace Granularity: Not Supported
00:15:01.965 SQ Associations: Not Supported
00:15:01.965 UUID List: Not Supported
00:15:01.965 Multi-Domain Subsystem: Not Supported
00:15:01.965 Fixed Capacity Management: Not Supported
00:15:01.965 Variable Capacity Management: Not Supported
00:15:01.965 Delete Endurance Group: Not Supported
00:15:01.965 Delete NVM Set: Not Supported
00:15:01.965 Extended LBA Formats Supported: Not Supported
00:15:01.965 Flexible Data Placement Supported: Not Supported
00:15:01.965
00:15:01.965 Controller Memory Buffer Support
00:15:01.965 ================================
00:15:01.965 Supported: No
00:15:01.965
00:15:01.965 Persistent Memory Region Support
00:15:01.965 ================================
00:15:01.965 Supported: No
00:15:01.965
00:15:01.965 Admin Command Set Attributes
00:15:01.965 ============================
00:15:01.965 Security Send/Receive: Not Supported
00:15:01.965 Format NVM: Not Supported
00:15:01.965 Firmware Activate/Download: Not Supported
00:15:01.965 Namespace Management: Not Supported
00:15:01.965 Device Self-Test: Not Supported
00:15:01.965 Directives: Not Supported
00:15:01.965 NVMe-MI: Not Supported
00:15:01.965 Virtualization Management: Not Supported
00:15:01.965 Doorbell Buffer Config: Not Supported
00:15:01.965 Get LBA Status Capability: Not Supported
00:15:01.965 Command & Feature Lockdown Capability: Not Supported
00:15:01.965 Abort Command Limit: 4
00:15:01.965 Async Event Request Limit: 4
00:15:01.965 Number of Firmware Slots: N/A
00:15:01.965 Firmware Slot 1 Read-Only: N/A
00:15:01.965 Firmware Activation Without Reset: N/A
00:15:01.965 Multiple Update Detection Support: N/A
00:15:01.965 Firmware Update Granularity: No Information Provided
00:15:01.965 Per-Namespace SMART Log: No
00:15:01.965 Asymmetric Namespace Access Log Page: Not Supported
00:15:01.965 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:15:01.965 Command Effects Log Page: Supported
00:15:01.965 Get Log Page Extended Data: Supported
00:15:01.965 Telemetry Log Pages: Not Supported
00:15:01.965 Persistent Event Log Pages: Not Supported
00:15:01.965 Supported Log Pages Log Page: May Support
00:15:01.965 Commands Supported & Effects Log Page: Not Supported
00:15:01.965 Feature Identifiers & Effects Log Page:May Support
00:15:01.965 NVMe-MI Commands & Effects Log Page: May Support
00:15:01.965 Data Area 4 for Telemetry Log: Not Supported
00:15:01.965 Error Log Page Entries Supported: 128
00:15:01.965 Keep Alive: Supported
00:15:01.965 Keep Alive Granularity: 10000 ms
00:15:01.965
00:15:01.965 NVM Command Set Attributes
00:15:01.965 ==========================
00:15:01.965 Submission Queue Entry Size
00:15:01.965 Max: 64
00:15:01.965 Min: 64
00:15:01.965 Completion Queue Entry Size
00:15:01.965 Max: 16
00:15:01.965 Min: 16
00:15:01.965 Number of Namespaces: 32
00:15:01.965 Compare Command: Supported
00:15:01.965 Write Uncorrectable Command: Not Supported
00:15:01.965 Dataset Management Command: Supported
00:15:01.965 Write Zeroes Command: Supported
00:15:01.965 Set Features Save Field: Not Supported
00:15:01.965 Reservations: Not Supported
00:15:01.965 Timestamp: Not Supported
00:15:01.965 Copy: Supported
00:15:01.965 Volatile Write Cache: Present
00:15:01.965 Atomic Write Unit (Normal): 1
00:15:01.965 Atomic Write Unit (PFail): 1
00:15:01.965 Atomic Compare & Write Unit: 1
00:15:01.965 Fused Compare & Write: Supported
00:15:01.965 Scatter-Gather List
00:15:01.965 SGL Command Set: Supported (Dword aligned)
00:15:01.965 SGL Keyed: Not Supported
00:15:01.965 SGL Bit Bucket Descriptor: Not Supported
00:15:01.965 SGL Metadata Pointer: Not Supported
00:15:01.965 Oversized SGL: Not Supported
00:15:01.965 SGL Metadata Address: Not Supported
00:15:01.965 SGL Offset: Not Supported
00:15:01.965 Transport SGL Data Block: Not Supported
00:15:01.965 Replay Protected Memory Block: Not Supported
00:15:01.965
00:15:01.965 Firmware Slot Information
00:15:01.965 =========================
00:15:01.965 Active slot: 1
00:15:01.965 Slot 1 Firmware Revision: 25.01
00:15:01.965
00:15:01.965
00:15:01.965 Commands Supported and Effects
00:15:01.965 ==============================
00:15:01.965 Admin Commands
00:15:01.965 --------------
00:15:01.965 Get Log Page (02h): Supported
00:15:01.965 Identify (06h): Supported
00:15:01.965 Abort (08h): Supported
00:15:01.965 Set Features (09h): Supported
00:15:01.965 Get Features (0Ah): Supported
00:15:01.965 Asynchronous Event Request (0Ch): Supported
00:15:01.965 Keep Alive (18h): Supported
00:15:01.965 I/O Commands
00:15:01.965 ------------
00:15:01.965 Flush (00h): Supported LBA-Change
00:15:01.965 Write (01h): Supported LBA-Change
00:15:01.965 Read (02h): Supported
00:15:01.965 Compare (05h): Supported
00:15:01.965 Write Zeroes (08h): Supported LBA-Change
00:15:01.965 Dataset Management (09h): Supported LBA-Change
00:15:01.965 Copy (19h): Supported LBA-Change
00:15:01.965
00:15:01.965 Error Log
00:15:01.965 =========
00:15:01.966
00:15:01.966 Arbitration
00:15:01.966 ===========
00:15:01.966 Arbitration Burst: 1
00:15:01.966
00:15:01.966 Power Management
00:15:01.966 ================
00:15:01.966 Number of Power States: 1
00:15:01.966 Current Power State: Power State #0
00:15:01.966 Power State #0:
00:15:01.966 Max Power: 0.00 W
00:15:01.966 Non-Operational State: Operational
00:15:01.966 Entry Latency: Not Reported
00:15:01.966 Exit Latency: Not Reported
00:15:01.966 Relative Read Throughput: 0
00:15:01.966 Relative Read Latency: 0
00:15:01.966 Relative Write Throughput: 0
00:15:01.966 Relative Write Latency: 0
00:15:01.966 Idle Power: Not Reported
00:15:01.966 Active Power: Not Reported
00:15:01.966 Non-Operational Permissive Mode: Not Supported
00:15:01.966
00:15:01.966 Health Information
00:15:01.966 ==================
00:15:01.966 Critical Warnings:
00:15:01.966 Available Spare Space: OK
00:15:01.966 Temperature: OK
00:15:01.966 Device Reliability: OK
00:15:01.966 Read Only: No
00:15:01.966 Volatile Memory Backup: OK
00:15:01.966 Current Temperature: 0 Kelvin (-273 Celsius)
00:15:01.966 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:01.966 Available Spare: 0%
00:15:01.966 Available Spare Threshold: 0%
00:15:01.966 Life Percentage Used: 0%
00:15:01.966 Data Units Read: 0
00:15:01.966 Data Units Written: 0
00:15:01.966 Host Read Commands: 0
00:15:01.966 Host Write Commands: 0
00:15:01.966 Controller Busy Time: 0 minutes
00:15:01.966 Power Cycles: 0
00:15:01.966 Power On Hours: 0 hours
00:15:01.966 Unsafe Shutdowns: 0
00:15:01.966 Unrecoverable Media Errors: 0
00:15:01.966 Lifetime Error Log Entries: 0
00:15:01.966 Warning Temperature Time: 0 minutes
00:15:01.966 Critical Temperature Time: 0 minutes
00:15:01.966
00:15:01.966 Number of Queues
00:15:01.966 ================
00:15:01.966 Number of I/O Submission Queues: 127
00:15:01.966 Number of I/O Completion Queues: 127
00:15:01.966
00:15:01.966 Active Namespaces
00:15:01.966 =================
00:15:01.966 Namespace ID:1
00:15:01.966 Error Recovery Timeout: Unlimited
00:15:01.966 Command Set Identifier: NVM (00h)
00:15:01.966 Deallocate: Supported
00:15:01.966 Deallocated/Unwritten Error: Not Supported
00:15:01.966 Deallocated Read Value: Unknown
00:15:01.966 Deallocate in Write Zeroes: Not Supported
00:15:01.966 Deallocated Guard Field: 0xFFFF
00:15:01.966 Flush: Supported
00:15:01.966 Reservation: Supported
00:15:01.966 Namespace Sharing Capabilities: Multiple Controllers
00:15:01.966 Size (in LBAs): 131072 (0GiB)
00:15:01.966 Capacity (in LBAs): 131072 (0GiB)
00:15:01.966 Utilization (in LBAs): 131072 (0GiB)
00:15:01.966 NGUID: 27681B81178A4E69B2BE0C332743942D
00:15:01.966 UUID: 27681b81-178a-4e69-b2be-0c332743942d
00:15:01.966 Thin Provisioning: Not Supported
00:15:01.966 Per-NS Atomic Units: Yes
00:15:01.966 Atomic Boundary Size (Normal): 0
00:15:01.966 Atomic Boundary Size (PFail): 0
00:15:01.966 Atomic Boundary Offset: 0
00:15:01.966 Maximum Single Source Range Length: 65535
00:15:01.966 Maximum Copy Length: 65535
00:15:01.966 Maximum Source Range Count: 1
00:15:01.966 NGUID/EUI64 Never Reused: No
00:15:01.966 Namespace Write Protected: No
00:15:01.966 Number of LBA Formats: 1
00:15:01.966 Current LBA Format: LBA Format #00
00:15:01.966 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:01.966
00:15:01.966 [2024-11-20 07:15:36.471532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:01.966 [2024-11-20 07:15:36.471541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:01.966 [2024-11-20 07:15:36.471570] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:15:01.966 [2024-11-20 07:15:36.471580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.966 [2024-11-20 07:15:36.471587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.966 [2024-11-20 07:15:36.471593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.966 [2024-11-20 07:15:36.471599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:01.966 [2024-11-20 07:15:36.472615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:15:01.966 [2024-11-20 07:15:36.472626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:15:01.966 [2024-11-20 07:15:36.473610] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:01.966 [2024-11-20 07:15:36.473653] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:15:01.966 [2024-11-20 07:15:36.473660] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:15:01.966 [2024-11-20 07:15:36.474619] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:15:01.966 [2024-11-20 07:15:36.474630] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:15:01.966 [2024-11-20 07:15:36.474693] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:15:01.966 [2024-11-20 07:15:36.479873] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:01.966
00:15:01.966 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:15:01.966 [2024-11-20 07:15:36.684557] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:07.260 Initializing NVMe Controllers
00:15:07.260 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:07.260 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:07.260 Initialization complete. Launching workers.
00:15:07.260 ========================================================
00:15:07.260 Latency(us)
00:15:07.260 Device Information : IOPS MiB/s Average min max
00:15:07.260 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40022.94 156.34 3198.01 836.59 10789.39
00:15:07.260 ========================================================
00:15:07.260 Total : 40022.94 156.34 3198.01 836.59 10789.39
00:15:07.260
00:15:07.260 [2024-11-20 07:15:41.703778] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:07.260 07:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:07.260 [2024-11-20 07:15:41.894647] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:12.551 Initializing NVMe Controllers
00:15:12.551 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:12.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:12.551 Initialization complete. Launching workers.
00:15:12.551 ======================================================== 00:15:12.551 Latency(us) 00:15:12.551 Device Information : IOPS MiB/s Average min max 00:15:12.551 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.73 7638.04 8069.47 00:15:12.551 ======================================================== 00:15:12.551 Total : 16051.20 62.70 7980.73 7638.04 8069.47 00:15:12.551 00:15:12.551 [2024-11-20 07:15:46.928993] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.551 07:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.551 [2024-11-20 07:15:47.145910] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.842 [2024-11-20 07:15:52.212074] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.842 Initializing NVMe Controllers 00:15:17.842 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:17.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:17.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:17.842 Initialization complete. Launching workers. 00:15:17.842 Starting thread on core 2 00:15:17.842 Starting thread on core 3 00:15:17.842 Starting thread on core 1 00:15:17.842 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:17.842 [2024-11-20 07:15:52.506251] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.051 [2024-11-20 07:15:56.092041] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.051 Initializing NVMe Controllers 00:15:22.051 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.051 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:22.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:22.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:22.051 Initialization complete. Launching workers. 
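In the arbitration table below, the two per-core columns are redundant by construction: "secs/100000 ios" is simply 100000 divided by the IO/s value (e.g. 14098.67 IO/s gives 100000 / 14098.67 ≈ 7.09 s). A minimal check, with the values copied from the table that follows:

    # Sketch: the arbitration example's per-core columns satisfy
    # secs_per_100k = 100000 / iops.
    for iops, secs in [(13194.33, 7.58), (14098.67, 7.09),
                       (10013.67, 9.99), (12295.33, 8.13)]:
        assert abs(100000 / iops - secs) < 0.01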
00:15:22.051 Starting thread on core 1 with urgent priority queue 00:15:22.051 Starting thread on core 2 with urgent priority queue 00:15:22.051 Starting thread on core 3 with urgent priority queue 00:15:22.051 Starting thread on core 0 with urgent priority queue 00:15:22.051 SPDK bdev Controller (SPDK1 ) core 0: 13194.33 IO/s 7.58 secs/100000 ios 00:15:22.051 SPDK bdev Controller (SPDK1 ) core 1: 14098.67 IO/s 7.09 secs/100000 ios 00:15:22.051 SPDK bdev Controller (SPDK1 ) core 2: 10013.67 IO/s 9.99 secs/100000 ios 00:15:22.051 SPDK bdev Controller (SPDK1 ) core 3: 12295.33 IO/s 8.13 secs/100000 ios 00:15:22.051 ======================================================== 00:15:22.051 00:15:22.051 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:22.051 [2024-11-20 07:15:56.388974] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.051 Initializing NVMe Controllers 00:15:22.051 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.051 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.051 Namespace ID: 1 size: 0GB 00:15:22.051 Initialization complete. 00:15:22.051 INFO: using host memory buffer for IO 00:15:22.051 Hello world! 00:15:22.051 [2024-11-20 07:15:56.424184] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.051 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:22.051 [2024-11-20 07:15:56.720287] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.996 Initializing NVMe Controllers 00:15:22.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.996 Initialization complete. Launching workers. 
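The overhead tool's histograms that follow print one row per latency bucket: the bucket bounds in microseconds, the cumulative percentage of IOs at or below the bucket, and the per-bucket IO count in parentheses (so a row like "3.920 - 3.947: 4.3235% ( 751)" means 751 IOs landed in that bucket and 4.3235% of all IOs completed by 3.947 us). A rough way to recover an approximate mean from such rows; this is only a sketch, since bucket midpoints approximate the true per-IO latencies and will not exactly reproduce the reported avg:

    # Sketch: approximate mean latency from (low_us, high_us, count) rows
    # transcribed from the histogram output below.
    rows = [(3.893, 3.920, 75), (3.920, 3.947, 751)]  # copy more rows as needed
    total = sum(c for _, _, c in rows)
    mean_us = sum((lo + hi) / 2 * c for lo, hi, c in rows) / total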
00:15:22.996 submit (in ns) avg, min, max = 7155.6, 3902.5, 4000484.2 00:15:22.996 complete (in ns) avg, min, max = 19020.5, 2374.2, 3999789.2 00:15:22.996 00:15:22.996 Submit histogram 00:15:22.996 ================ 00:15:22.996 Range in us Cumulative Count 00:15:22.996 3.893 - 3.920: 0.3926% ( 75) 00:15:22.996 3.920 - 3.947: 4.3235% ( 751) 00:15:22.996 3.947 - 3.973: 13.2374% ( 1703) 00:15:22.996 3.973 - 4.000: 24.6480% ( 2180) 00:15:22.996 4.000 - 4.027: 36.6187% ( 2287) 00:15:22.996 4.027 - 4.053: 50.1963% ( 2594) 00:15:22.996 4.053 - 4.080: 68.2282% ( 3445) 00:15:22.996 4.080 - 4.107: 82.7270% ( 2770) 00:15:22.996 4.107 - 4.133: 91.4525% ( 1667) 00:15:22.996 4.133 - 4.160: 96.2104% ( 909) 00:15:22.996 4.160 - 4.187: 98.4402% ( 426) 00:15:22.996 4.187 - 4.213: 99.1049% ( 127) 00:15:22.996 4.213 - 4.240: 99.4138% ( 59) 00:15:22.996 4.240 - 4.267: 99.4923% ( 15) 00:15:22.996 4.267 - 4.293: 99.5080% ( 3) 00:15:22.996 4.427 - 4.453: 99.5132% ( 1) 00:15:22.996 4.480 - 4.507: 99.5185% ( 1) 00:15:22.996 4.853 - 4.880: 99.5237% ( 1) 00:15:22.996 4.960 - 4.987: 99.5289% ( 1) 00:15:22.996 5.040 - 5.067: 99.5342% ( 1) 00:15:22.996 5.200 - 5.227: 99.5446% ( 2) 00:15:22.996 5.547 - 5.573: 99.5499% ( 1) 00:15:22.996 5.733 - 5.760: 99.5551% ( 1) 00:15:22.996 5.787 - 5.813: 99.5603% ( 1) 00:15:22.996 5.840 - 5.867: 99.5656% ( 1) 00:15:22.996 5.920 - 5.947: 99.5708% ( 1) 00:15:22.996 5.973 - 6.000: 99.5813% ( 2) 00:15:22.996 6.027 - 6.053: 99.5865% ( 1) 00:15:22.996 6.080 - 6.107: 99.5917% ( 1) 00:15:22.996 6.133 - 6.160: 99.5970% ( 1) 00:15:22.996 6.160 - 6.187: 99.6074% ( 2) 00:15:22.996 6.187 - 6.213: 99.6127% ( 1) 00:15:22.996 6.213 - 6.240: 99.6231% ( 2) 00:15:22.996 6.267 - 6.293: 99.6336% ( 2) 00:15:22.996 6.293 - 6.320: 99.6388% ( 1) 00:15:22.996 6.320 - 6.347: 99.6441% ( 1) 00:15:22.996 6.373 - 6.400: 99.6545% ( 2) 00:15:22.996 6.400 - 6.427: 99.6598% ( 1) 00:15:22.996 6.453 - 6.480: 99.6650% ( 1) 00:15:22.996 6.480 - 6.507: 99.6755% ( 2) 00:15:22.996 6.507 - 6.533: 99.6807% ( 1) 00:15:22.996 6.720 - 6.747: 99.6859% ( 1) 00:15:22.996 6.747 - 6.773: 99.6912% ( 1) 00:15:22.996 6.880 - 6.933: 99.7016% ( 2) 00:15:22.996 6.933 - 6.987: 99.7121% ( 2) 00:15:22.996 6.987 - 7.040: 99.7174% ( 1) 00:15:22.996 7.040 - 7.093: 99.7278% ( 2) 00:15:22.996 7.093 - 7.147: 99.7383% ( 2) 00:15:22.996 7.147 - 7.200: 99.7435% ( 1) 00:15:22.996 7.200 - 7.253: 99.7488% ( 1) 00:15:22.996 7.253 - 7.307: 99.7645% ( 3) 00:15:22.996 7.307 - 7.360: 99.7749% ( 2) 00:15:22.996 7.360 - 7.413: 99.7802% ( 1) 00:15:22.996 7.413 - 7.467: 99.7959% ( 3) 00:15:22.996 7.467 - 7.520: 99.8063% ( 2) 00:15:22.996 7.520 - 7.573: 99.8168% ( 2) 00:15:22.996 7.573 - 7.627: 99.8377% ( 4) 00:15:22.996 7.627 - 7.680: 99.8430% ( 1) 00:15:22.996 7.680 - 7.733: 99.8587% ( 3) 00:15:22.996 7.733 - 7.787: 99.8639% ( 1) 00:15:22.996 7.787 - 7.840: 99.8744% ( 2) 00:15:22.996 7.947 - 8.000: 99.8796% ( 1) 00:15:22.996 8.107 - 8.160: 99.8848% ( 1) 00:15:22.996 8.267 - 8.320: 99.8901% ( 1) 00:15:22.996 8.587 - 8.640: 99.8953% ( 1) 00:15:22.996 8.747 - 8.800: 99.9005% ( 1) 00:15:22.996 9.013 - 9.067: 99.9058% ( 1) 00:15:22.996 9.547 - 9.600: 99.9110% ( 1) 00:15:22.996 14.613 - 14.720: 99.9163% ( 1) 00:15:22.996 45.867 - 46.080: 99.9215% ( 1) 00:15:22.996 3167.573 - 3181.227: 99.9267% ( 1) 00:15:22.996 3986.773 - 4014.080: 100.0000% ( 14) 00:15:22.996 00:15:22.996 Complete histogram 00:15:22.996 ================== 00:15:22.996 Range in us Cumulative Count 00:15:22.996 2.373 - 2.387: 0.0105% ( 2) 00:15:22.996 2.387 - 2.400: 0.2408% ( 44) 00:15:22.996 2.400 - 
2.413: 0.9474% ( 135) 00:15:22.996 2.413 - 2.427: 1.0154% ( 13) 00:15:22.996 2.427 - 2.440: 1.1620% ( 28) 00:15:22.996 2.440 - [2024-11-20 07:15:57.742786] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.257 2.453: 1.2300% ( 13) 00:15:23.257 2.453 - 2.467: 4.8835% ( 698) 00:15:23.257 2.467 - 2.480: 47.1552% ( 8076) 00:15:23.257 2.480 - 2.493: 59.2410% ( 2309) 00:15:23.257 2.493 - 2.507: 71.4263% ( 2328) 00:15:23.257 2.507 - 2.520: 77.6969% ( 1198) 00:15:23.257 2.520 - 2.533: 80.8061% ( 594) 00:15:23.257 2.533 - 2.547: 85.3232% ( 863) 00:15:23.257 2.547 - 2.560: 91.6305% ( 1205) 00:15:23.257 2.560 - 2.573: 95.6975% ( 777) 00:15:23.257 2.573 - 2.587: 97.7336% ( 389) 00:15:23.257 2.587 - 2.600: 98.8642% ( 216) 00:15:23.257 2.600 - 2.613: 99.2672% ( 77) 00:15:23.257 2.613 - 2.627: 99.3824% ( 22) 00:15:23.257 2.627 - 2.640: 99.3981% ( 3) 00:15:23.257 2.653 - 2.667: 99.4033% ( 1) 00:15:23.257 2.667 - 2.680: 99.4085% ( 1) 00:15:23.257 4.240 - 4.267: 99.4138% ( 1) 00:15:23.257 4.480 - 4.507: 99.4190% ( 1) 00:15:23.257 4.640 - 4.667: 99.4242% ( 1) 00:15:23.257 4.667 - 4.693: 99.4295% ( 1) 00:15:23.257 4.773 - 4.800: 99.4347% ( 1) 00:15:23.257 4.880 - 4.907: 99.4399% ( 1) 00:15:23.257 4.907 - 4.933: 99.4452% ( 1) 00:15:23.257 4.987 - 5.013: 99.4504% ( 1) 00:15:23.257 5.040 - 5.067: 99.4556% ( 1) 00:15:23.257 5.067 - 5.093: 99.4713% ( 3) 00:15:23.257 5.093 - 5.120: 99.4818% ( 2) 00:15:23.257 5.120 - 5.147: 99.4870% ( 1) 00:15:23.257 5.227 - 5.253: 99.4923% ( 1) 00:15:23.257 5.333 - 5.360: 99.4975% ( 1) 00:15:23.257 5.440 - 5.467: 99.5132% ( 3) 00:15:23.257 5.573 - 5.600: 99.5185% ( 1) 00:15:23.257 5.627 - 5.653: 99.5289% ( 2) 00:15:23.257 5.680 - 5.707: 99.5342% ( 1) 00:15:23.257 5.893 - 5.920: 99.5394% ( 1) 00:15:23.257 5.947 - 5.973: 99.5499% ( 2) 00:15:23.257 6.107 - 6.133: 99.5551% ( 1) 00:15:23.257 6.187 - 6.213: 99.5656% ( 2) 00:15:23.257 6.320 - 6.347: 99.5708% ( 1) 00:15:23.257 7.093 - 7.147: 99.5760% ( 1) 00:15:23.257 10.880 - 10.933: 99.5813% ( 1) 00:15:23.257 14.080 - 14.187: 99.5865% ( 1) 00:15:23.257 3986.773 - 4014.080: 100.0000% ( 79) 00:15:23.257 00:15:23.257 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:23.257 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:23.257 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:23.257 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.258 [ 00:15:23.258 { 00:15:23.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.258 "subtype": "Discovery", 00:15:23.258 "listen_addresses": [], 00:15:23.258 "allow_any_host": true, 00:15:23.258 "hosts": [] 00:15:23.258 }, 00:15:23.258 { 00:15:23.258 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.258 "subtype": "NVMe", 00:15:23.258 "listen_addresses": [ 00:15:23.258 { 00:15:23.258 "trtype": "VFIOUSER", 00:15:23.258 "adrfam": "IPv4", 00:15:23.258 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.258 "trsvcid": "0" 00:15:23.258 } 00:15:23.258 ], 00:15:23.258 "allow_any_host": true, 
00:15:23.258 "hosts": [], 00:15:23.258 "serial_number": "SPDK1", 00:15:23.258 "model_number": "SPDK bdev Controller", 00:15:23.258 "max_namespaces": 32, 00:15:23.258 "min_cntlid": 1, 00:15:23.258 "max_cntlid": 65519, 00:15:23.258 "namespaces": [ 00:15:23.258 { 00:15:23.258 "nsid": 1, 00:15:23.258 "bdev_name": "Malloc1", 00:15:23.258 "name": "Malloc1", 00:15:23.258 "nguid": "27681B81178A4E69B2BE0C332743942D", 00:15:23.258 "uuid": "27681b81-178a-4e69-b2be-0c332743942d" 00:15:23.258 } 00:15:23.258 ] 00:15:23.258 }, 00:15:23.258 { 00:15:23.258 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.258 "subtype": "NVMe", 00:15:23.258 "listen_addresses": [ 00:15:23.258 { 00:15:23.258 "trtype": "VFIOUSER", 00:15:23.258 "adrfam": "IPv4", 00:15:23.258 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.258 "trsvcid": "0" 00:15:23.258 } 00:15:23.258 ], 00:15:23.258 "allow_any_host": true, 00:15:23.258 "hosts": [], 00:15:23.258 "serial_number": "SPDK2", 00:15:23.258 "model_number": "SPDK bdev Controller", 00:15:23.258 "max_namespaces": 32, 00:15:23.258 "min_cntlid": 1, 00:15:23.258 "max_cntlid": 65519, 00:15:23.258 "namespaces": [ 00:15:23.258 { 00:15:23.258 "nsid": 1, 00:15:23.258 "bdev_name": "Malloc2", 00:15:23.258 "name": "Malloc2", 00:15:23.258 "nguid": "0BC7A7E2F0B4448E9FC06E7E636E38C3", 00:15:23.258 "uuid": "0bc7a7e2-f0b4-448e-9fc0-6e7e636e38c3" 00:15:23.258 } 00:15:23.258 ] 00:15:23.258 } 00:15:23.258 ] 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1236522 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:23.258 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:23.518 Malloc3 00:15:23.518 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:23.518 [2024-11-20 07:15:58.184324] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.780 [2024-11-20 07:15:58.337290] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.780 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.780 Asynchronous Event Request test 00:15:23.780 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.780 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.780 Registering asynchronous event callbacks... 00:15:23.780 Starting namespace attribute notice tests for all controllers... 00:15:23.780 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.780 aer_cb - Changed Namespace 00:15:23.780 Cleaning up... 00:15:23.780 [ 00:15:23.780 { 00:15:23.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.780 "subtype": "Discovery", 00:15:23.780 "listen_addresses": [], 00:15:23.780 "allow_any_host": true, 00:15:23.780 "hosts": [] 00:15:23.780 }, 00:15:23.780 { 00:15:23.780 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.780 "subtype": "NVMe", 00:15:23.780 "listen_addresses": [ 00:15:23.780 { 00:15:23.780 "trtype": "VFIOUSER", 00:15:23.780 "adrfam": "IPv4", 00:15:23.780 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.780 "trsvcid": "0" 00:15:23.780 } 00:15:23.780 ], 00:15:23.780 "allow_any_host": true, 00:15:23.780 "hosts": [], 00:15:23.780 "serial_number": "SPDK1", 00:15:23.780 "model_number": "SPDK bdev Controller", 00:15:23.780 "max_namespaces": 32, 00:15:23.780 "min_cntlid": 1, 00:15:23.780 "max_cntlid": 65519, 00:15:23.780 "namespaces": [ 00:15:23.780 { 00:15:23.780 "nsid": 1, 00:15:23.780 "bdev_name": "Malloc1", 00:15:23.780 "name": "Malloc1", 00:15:23.780 "nguid": "27681B81178A4E69B2BE0C332743942D", 00:15:23.780 "uuid": "27681b81-178a-4e69-b2be-0c332743942d" 00:15:23.780 }, 00:15:23.780 { 00:15:23.780 "nsid": 2, 00:15:23.780 "bdev_name": "Malloc3", 00:15:23.780 "name": "Malloc3", 00:15:23.780 "nguid": "8E5872E0887C4500B5E9CBF2D980D3CA", 00:15:23.780 "uuid": "8e5872e0-887c-4500-b5e9-cbf2d980d3ca" 00:15:23.780 } 00:15:23.780 ] 00:15:23.780 }, 00:15:23.780 { 00:15:23.780 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.780 "subtype": "NVMe", 00:15:23.780 "listen_addresses": [ 00:15:23.780 { 00:15:23.780 "trtype": "VFIOUSER", 00:15:23.780 "adrfam": "IPv4", 00:15:23.780 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.780 "trsvcid": "0" 00:15:23.780 } 00:15:23.780 ], 00:15:23.780 "allow_any_host": true, 00:15:23.780 "hosts": [], 00:15:23.780 "serial_number": "SPDK2", 00:15:23.780 "model_number": "SPDK bdev 
Controller", 00:15:23.780 "max_namespaces": 32, 00:15:23.780 "min_cntlid": 1, 00:15:23.780 "max_cntlid": 65519, 00:15:23.780 "namespaces": [ 00:15:23.780 { 00:15:23.780 "nsid": 1, 00:15:23.780 "bdev_name": "Malloc2", 00:15:23.781 "name": "Malloc2", 00:15:23.781 "nguid": "0BC7A7E2F0B4448E9FC06E7E636E38C3", 00:15:23.781 "uuid": "0bc7a7e2-f0b4-448e-9fc0-6e7e636e38c3" 00:15:23.781 } 00:15:23.781 ] 00:15:23.781 } 00:15:23.781 ] 00:15:23.781 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1236522 00:15:23.781 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.781 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:23.781 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:23.781 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:24.043 [2024-11-20 07:15:58.562551] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:15:24.043 [2024-11-20 07:15:58.562595] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236538 ] 00:15:24.043 [2024-11-20 07:15:58.617921] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:24.043 [2024-11-20 07:15:58.626113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:24.043 [2024-11-20 07:15:58.626137] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0b1f038000 00:15:24.043 [2024-11-20 07:15:58.627113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.628115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.629121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.630128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.631137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.632141] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.633145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.043 [2024-11-20 07:15:58.634158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:24.043 [2024-11-20 07:15:58.635164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:24.043 [2024-11-20 07:15:58.635175] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0b1f02d000 00:15:24.043 [2024-11-20 07:15:58.636594] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.043 [2024-11-20 07:15:58.654463] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:24.043 [2024-11-20 07:15:58.654489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:24.043 [2024-11-20 07:15:58.659566] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.043 [2024-11-20 07:15:58.659615] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:24.043 [2024-11-20 07:15:58.659699] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:24.044 [2024-11-20 07:15:58.659712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:24.044 [2024-11-20 07:15:58.659717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:24.044 [2024-11-20 07:15:58.660576] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:24.044 [2024-11-20 07:15:58.660586] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:24.044 [2024-11-20 07:15:58.660593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:24.044 [2024-11-20 07:15:58.661582] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.044 [2024-11-20 07:15:58.661591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:24.044 [2024-11-20 07:15:58.661599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.662591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:24.044 [2024-11-20 07:15:58.662601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.663601] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:24.044 [2024-11-20 07:15:58.663610] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:24.044 [2024-11-20 07:15:58.663618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.663625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.663734] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:24.044 [2024-11-20 07:15:58.663739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.663744] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:24.044 [2024-11-20 07:15:58.664616] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:24.044 [2024-11-20 07:15:58.665624] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:24.044 [2024-11-20 07:15:58.666630] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.044 [2024-11-20 07:15:58.667636] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.044 [2024-11-20 07:15:58.667674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:24.044 [2024-11-20 07:15:58.668642] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:24.044 [2024-11-20 07:15:58.668652] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:24.044 [2024-11-20 07:15:58.668657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.668678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:24.044 [2024-11-20 07:15:58.668686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.668699] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.044 [2024-11-20 07:15:58.668704] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.044 [2024-11-20 07:15:58.668708] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.044 [2024-11-20 07:15:58.668719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.674869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:24.044 
[2024-11-20 07:15:58.674881] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:24.044 [2024-11-20 07:15:58.674886] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:24.044 [2024-11-20 07:15:58.674891] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:24.044 [2024-11-20 07:15:58.674895] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:24.044 [2024-11-20 07:15:58.674903] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:24.044 [2024-11-20 07:15:58.674910] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:24.044 [2024-11-20 07:15:58.674915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.674924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.674935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.682869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.682882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.044 [2024-11-20 07:15:58.682891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.044 [2024-11-20 07:15:58.682899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.044 [2024-11-20 07:15:58.682908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.044 [2024-11-20 07:15:58.682912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.682920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.682929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.690878] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:24.044 [2024-11-20 07:15:58.690883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:24.044 [2024-11-20 07:15:58.690890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.690896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.690905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.698866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.698932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.698940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.698948] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:24.044 [2024-11-20 07:15:58.698953] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:24.044 [2024-11-20 07:15:58.698957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.044 [2024-11-20 07:15:58.698963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.706868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.706880] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:24.044 [2024-11-20 07:15:58.706893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.706901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.706908] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.044 [2024-11-20 07:15:58.706913] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.044 [2024-11-20 07:15:58.706916] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.044 [2024-11-20 07:15:58.706922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.714868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.714882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.714890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:24.044 [2024-11-20 07:15:58.714898] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.044 [2024-11-20 07:15:58.714902] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.044 [2024-11-20 07:15:58.714906] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.044 [2024-11-20 07:15:58.714912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.044 [2024-11-20 07:15:58.722869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:24.044 [2024-11-20 07:15:58.722878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722915] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:24.045 [2024-11-20 07:15:58.722920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:24.045 [2024-11-20 07:15:58.722925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:24.045 [2024-11-20 07:15:58.722943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.730867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:24.045 [2024-11-20 07:15:58.730881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.738868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:24.045 [2024-11-20 07:15:58.738881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.746867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:24.045 [2024-11-20 07:15:58.746887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.754866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:24.045 [2024-11-20 07:15:58.754882] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:24.045 [2024-11-20 07:15:58.754886] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:24.045 [2024-11-20 07:15:58.754890] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:24.045 [2024-11-20 07:15:58.754894] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:24.045 [2024-11-20 07:15:58.754897] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:24.045 [2024-11-20 07:15:58.754903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:24.045 [2024-11-20 07:15:58.754911] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:24.045 [2024-11-20 07:15:58.754916] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:24.045 [2024-11-20 07:15:58.754919] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.045 [2024-11-20 07:15:58.754925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.754932] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:24.045 [2024-11-20 07:15:58.754936] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.045 [2024-11-20 07:15:58.754940] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.045 [2024-11-20 07:15:58.754945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.754953] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:24.045 [2024-11-20 07:15:58.754957] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:24.045 [2024-11-20 07:15:58.754961] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:24.045 [2024-11-20 07:15:58.754967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:24.045 [2024-11-20 07:15:58.762869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:24.045 [2024-11-20 07:15:58.762883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:24.045 [2024-11-20 07:15:58.762894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:24.045 
[2024-11-20 07:15:58.762903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:24.045 ===================================================== 00:15:24.045 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.045 ===================================================== 00:15:24.045 Controller Capabilities/Features 00:15:24.045 ================================ 00:15:24.045 Vendor ID: 4e58 00:15:24.045 Subsystem Vendor ID: 4e58 00:15:24.045 Serial Number: SPDK2 00:15:24.045 Model Number: SPDK bdev Controller 00:15:24.045 Firmware Version: 25.01 00:15:24.045 Recommended Arb Burst: 6 00:15:24.045 IEEE OUI Identifier: 8d 6b 50 00:15:24.045 Multi-path I/O 00:15:24.045 May have multiple subsystem ports: Yes 00:15:24.045 May have multiple controllers: Yes 00:15:24.045 Associated with SR-IOV VF: No 00:15:24.045 Max Data Transfer Size: 131072 00:15:24.045 Max Number of Namespaces: 32 00:15:24.045 Max Number of I/O Queues: 127 00:15:24.045 NVMe Specification Version (VS): 1.3 00:15:24.045 NVMe Specification Version (Identify): 1.3 00:15:24.045 Maximum Queue Entries: 256 00:15:24.045 Contiguous Queues Required: Yes 00:15:24.045 Arbitration Mechanisms Supported 00:15:24.045 Weighted Round Robin: Not Supported 00:15:24.045 Vendor Specific: Not Supported 00:15:24.045 Reset Timeout: 15000 ms 00:15:24.045 Doorbell Stride: 4 bytes 00:15:24.045 NVM Subsystem Reset: Not Supported 00:15:24.045 Command Sets Supported 00:15:24.045 NVM Command Set: Supported 00:15:24.045 Boot Partition: Not Supported 00:15:24.045 Memory Page Size Minimum: 4096 bytes 00:15:24.045 Memory Page Size Maximum: 4096 bytes 00:15:24.045 Persistent Memory Region: Not Supported 00:15:24.045 Optional Asynchronous Events Supported 00:15:24.045 Namespace Attribute Notices: Supported 00:15:24.045 Firmware Activation Notices: Not Supported 00:15:24.045 ANA Change Notices: Not Supported 00:15:24.045 PLE Aggregate Log Change Notices: Not Supported 00:15:24.045 LBA Status Info Alert Notices: Not Supported 00:15:24.045 EGE Aggregate Log Change Notices: Not Supported 00:15:24.045 Normal NVM Subsystem Shutdown event: Not Supported 00:15:24.045 Zone Descriptor Change Notices: Not Supported 00:15:24.045 Discovery Log Change Notices: Not Supported 00:15:24.045 Controller Attributes 00:15:24.045 128-bit Host Identifier: Supported 00:15:24.045 Non-Operational Permissive Mode: Not Supported 00:15:24.045 NVM Sets: Not Supported 00:15:24.045 Read Recovery Levels: Not Supported 00:15:24.045 Endurance Groups: Not Supported 00:15:24.045 Predictable Latency Mode: Not Supported 00:15:24.045 Traffic Based Keep ALive: Not Supported 00:15:24.045 Namespace Granularity: Not Supported 00:15:24.045 SQ Associations: Not Supported 00:15:24.045 UUID List: Not Supported 00:15:24.045 Multi-Domain Subsystem: Not Supported 00:15:24.045 Fixed Capacity Management: Not Supported 00:15:24.045 Variable Capacity Management: Not Supported 00:15:24.045 Delete Endurance Group: Not Supported 00:15:24.045 Delete NVM Set: Not Supported 00:15:24.045 Extended LBA Formats Supported: Not Supported 00:15:24.045 Flexible Data Placement Supported: Not Supported 00:15:24.045 00:15:24.045 Controller Memory Buffer Support 00:15:24.045 ================================ 00:15:24.045 Supported: No 00:15:24.045 00:15:24.045 Persistent Memory Region Support 00:15:24.045 ================================ 00:15:24.045 Supported: No 00:15:24.045 00:15:24.045 Admin Command Set Attributes 
00:15:24.045 ============================ 00:15:24.045 Security Send/Receive: Not Supported 00:15:24.045 Format NVM: Not Supported 00:15:24.045 Firmware Activate/Download: Not Supported 00:15:24.045 Namespace Management: Not Supported 00:15:24.045 Device Self-Test: Not Supported 00:15:24.045 Directives: Not Supported 00:15:24.045 NVMe-MI: Not Supported 00:15:24.045 Virtualization Management: Not Supported 00:15:24.045 Doorbell Buffer Config: Not Supported 00:15:24.045 Get LBA Status Capability: Not Supported 00:15:24.045 Command & Feature Lockdown Capability: Not Supported 00:15:24.045 Abort Command Limit: 4 00:15:24.045 Async Event Request Limit: 4 00:15:24.045 Number of Firmware Slots: N/A 00:15:24.045 Firmware Slot 1 Read-Only: N/A 00:15:24.045 Firmware Activation Without Reset: N/A 00:15:24.045 Multiple Update Detection Support: N/A 00:15:24.045 Firmware Update Granularity: No Information Provided 00:15:24.045 Per-Namespace SMART Log: No 00:15:24.045 Asymmetric Namespace Access Log Page: Not Supported 00:15:24.045 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:24.045 Command Effects Log Page: Supported 00:15:24.045 Get Log Page Extended Data: Supported 00:15:24.045 Telemetry Log Pages: Not Supported 00:15:24.045 Persistent Event Log Pages: Not Supported 00:15:24.045 Supported Log Pages Log Page: May Support 00:15:24.045 Commands Supported & Effects Log Page: Not Supported 00:15:24.045 Feature Identifiers & Effects Log Page:May Support 00:15:24.045 NVMe-MI Commands & Effects Log Page: May Support 00:15:24.046 Data Area 4 for Telemetry Log: Not Supported 00:15:24.046 Error Log Page Entries Supported: 128 00:15:24.046 Keep Alive: Supported 00:15:24.046 Keep Alive Granularity: 10000 ms 00:15:24.046 00:15:24.046 NVM Command Set Attributes 00:15:24.046 ========================== 00:15:24.046 Submission Queue Entry Size 00:15:24.046 Max: 64 00:15:24.046 Min: 64 00:15:24.046 Completion Queue Entry Size 00:15:24.046 Max: 16 00:15:24.046 Min: 16 00:15:24.046 Number of Namespaces: 32 00:15:24.046 Compare Command: Supported 00:15:24.046 Write Uncorrectable Command: Not Supported 00:15:24.046 Dataset Management Command: Supported 00:15:24.046 Write Zeroes Command: Supported 00:15:24.046 Set Features Save Field: Not Supported 00:15:24.046 Reservations: Not Supported 00:15:24.046 Timestamp: Not Supported 00:15:24.046 Copy: Supported 00:15:24.046 Volatile Write Cache: Present 00:15:24.046 Atomic Write Unit (Normal): 1 00:15:24.046 Atomic Write Unit (PFail): 1 00:15:24.046 Atomic Compare & Write Unit: 1 00:15:24.046 Fused Compare & Write: Supported 00:15:24.046 Scatter-Gather List 00:15:24.046 SGL Command Set: Supported (Dword aligned) 00:15:24.046 SGL Keyed: Not Supported 00:15:24.046 SGL Bit Bucket Descriptor: Not Supported 00:15:24.046 SGL Metadata Pointer: Not Supported 00:15:24.046 Oversized SGL: Not Supported 00:15:24.046 SGL Metadata Address: Not Supported 00:15:24.046 SGL Offset: Not Supported 00:15:24.046 Transport SGL Data Block: Not Supported 00:15:24.046 Replay Protected Memory Block: Not Supported 00:15:24.046 00:15:24.046 Firmware Slot Information 00:15:24.046 ========================= 00:15:24.046 Active slot: 1 00:15:24.046 Slot 1 Firmware Revision: 25.01 00:15:24.046 00:15:24.046 00:15:24.046 Commands Supported and Effects 00:15:24.046 ============================== 00:15:24.046 Admin Commands 00:15:24.046 -------------- 00:15:24.046 Get Log Page (02h): Supported 00:15:24.046 Identify (06h): Supported 00:15:24.046 Abort (08h): Supported 00:15:24.046 Set Features (09h): Supported 
00:15:24.046 Get Features (0Ah): Supported 00:15:24.046 Asynchronous Event Request (0Ch): Supported 00:15:24.046 Keep Alive (18h): Supported 00:15:24.046 I/O Commands 00:15:24.046 ------------ 00:15:24.046 Flush (00h): Supported LBA-Change 00:15:24.046 Write (01h): Supported LBA-Change 00:15:24.046 Read (02h): Supported 00:15:24.046 Compare (05h): Supported 00:15:24.046 Write Zeroes (08h): Supported LBA-Change 00:15:24.046 Dataset Management (09h): Supported LBA-Change 00:15:24.046 Copy (19h): Supported LBA-Change 00:15:24.046 00:15:24.046 Error Log 00:15:24.046 ========= 00:15:24.046 00:15:24.046 Arbitration 00:15:24.046 =========== 00:15:24.046 Arbitration Burst: 1 00:15:24.046 00:15:24.046 Power Management 00:15:24.046 ================ 00:15:24.046 Number of Power States: 1 00:15:24.046 Current Power State: Power State #0 00:15:24.046 Power State #0: 00:15:24.046 Max Power: 0.00 W 00:15:24.046 Non-Operational State: Operational 00:15:24.046 Entry Latency: Not Reported 00:15:24.046 Exit Latency: Not Reported 00:15:24.046 Relative Read Throughput: 0 00:15:24.046 Relative Read Latency: 0 00:15:24.046 Relative Write Throughput: 0 00:15:24.046 Relative Write Latency: 0 00:15:24.046 Idle Power: Not Reported 00:15:24.046 Active Power: Not Reported 00:15:24.046 Non-Operational Permissive Mode: Not Supported 00:15:24.046 00:15:24.046 Health Information 00:15:24.046 ================== 00:15:24.046 Critical Warnings: 00:15:24.046 Available Spare Space: OK 00:15:24.046 Temperature: OK 00:15:24.046 Device Reliability: OK 00:15:24.046 Read Only: No 00:15:24.046 Volatile Memory Backup: OK 00:15:24.046 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:24.046 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:24.046 Available Spare: 0% 00:15:24.046 Available Sp[2024-11-20 07:15:58.763002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:24.046 [2024-11-20 07:15:58.770868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:24.046 [2024-11-20 07:15:58.770898] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:24.046 [2024-11-20 07:15:58.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.046 [2024-11-20 07:15:58.770914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.046 [2024-11-20 07:15:58.770921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.046 [2024-11-20 07:15:58.770927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.046 [2024-11-20 07:15:58.770966] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.046 [2024-11-20 07:15:58.770977] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:24.046 [2024-11-20 07:15:58.771973] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.046 [2024-11-20 07:15:58.772022] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:24.046 [2024-11-20 07:15:58.772030] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:24.046 [2024-11-20 07:15:58.772982] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:24.046 [2024-11-20 07:15:58.772994] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:24.046 [2024-11-20 07:15:58.773042] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:24.046 [2024-11-20 07:15:58.775868] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.307 are Threshold: 0% 00:15:24.307 Life Percentage Used: 0% 00:15:24.307 Data Units Read: 0 00:15:24.307 Data Units Written: 0 00:15:24.307 Host Read Commands: 0 00:15:24.307 Host Write Commands: 0 00:15:24.307 Controller Busy Time: 0 minutes 00:15:24.307 Power Cycles: 0 00:15:24.307 Power On Hours: 0 hours 00:15:24.307 Unsafe Shutdowns: 0 00:15:24.307 Unrecoverable Media Errors: 0 00:15:24.307 Lifetime Error Log Entries: 0 00:15:24.307 Warning Temperature Time: 0 minutes 00:15:24.307 Critical Temperature Time: 0 minutes 00:15:24.307 00:15:24.307 Number of Queues 00:15:24.307 ================ 00:15:24.307 Number of I/O Submission Queues: 127 00:15:24.307 Number of I/O Completion Queues: 127 00:15:24.307 00:15:24.307 Active Namespaces 00:15:24.307 ================= 00:15:24.307 Namespace ID:1 00:15:24.307 Error Recovery Timeout: Unlimited 00:15:24.307 Command Set Identifier: NVM (00h) 00:15:24.307 Deallocate: Supported 00:15:24.308 Deallocated/Unwritten Error: Not Supported 00:15:24.308 Deallocated Read Value: Unknown 00:15:24.308 Deallocate in Write Zeroes: Not Supported 00:15:24.308 Deallocated Guard Field: 0xFFFF 00:15:24.308 Flush: Supported 00:15:24.308 Reservation: Supported 00:15:24.308 Namespace Sharing Capabilities: Multiple Controllers 00:15:24.308 Size (in LBAs): 131072 (0GiB) 00:15:24.308 Capacity (in LBAs): 131072 (0GiB) 00:15:24.308 Utilization (in LBAs): 131072 (0GiB) 00:15:24.308 NGUID: 0BC7A7E2F0B4448E9FC06E7E636E38C3 00:15:24.308 UUID: 0bc7a7e2-f0b4-448e-9fc0-6e7e636e38c3 00:15:24.308 Thin Provisioning: Not Supported 00:15:24.308 Per-NS Atomic Units: Yes 00:15:24.308 Atomic Boundary Size (Normal): 0 00:15:24.308 Atomic Boundary Size (PFail): 0 00:15:24.308 Atomic Boundary Offset: 0 00:15:24.308 Maximum Single Source Range Length: 65535 00:15:24.308 Maximum Copy Length: 65535 00:15:24.308 Maximum Source Range Count: 1 00:15:24.308 NGUID/EUI64 Never Reused: No 00:15:24.308 Namespace Write Protected: No 00:15:24.308 Number of LBA Formats: 1 00:15:24.308 Current LBA Format: LBA Format #00 00:15:24.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:24.308 00:15:24.308 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:24.308 [2024-11-20 07:15:58.979241] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.603 Initializing NVMe Controllers 00:15:29.603 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.603 Initialization complete. Launching workers. 00:15:29.603 ======================================================== 00:15:29.603 Latency(us) 00:15:29.603 Device Information : IOPS MiB/s Average min max 00:15:29.603 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40032.80 156.38 3197.24 837.36 9795.51 00:15:29.603 ======================================================== 00:15:29.603 Total : 40032.80 156.38 3197.24 837.36 9795.51 00:15:29.603 00:15:29.603 [2024-11-20 07:16:04.081064] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.603 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:29.603 [2024-11-20 07:16:04.272642] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.893 Initializing NVMe Controllers 00:15:34.893 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.893 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.893 Initialization complete. Launching workers. 00:15:34.893 ======================================================== 00:15:34.893 Latency(us) 00:15:34.893 Device Information : IOPS MiB/s Average min max 00:15:34.893 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35287.86 137.84 3627.06 1103.91 7641.96 00:15:34.893 ======================================================== 00:15:34.893 Total : 35287.86 137.84 3627.06 1103.91 7641.96 00:15:34.893 00:15:34.893 [2024-11-20 07:16:09.293610] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.893 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:34.893 [2024-11-20 07:16:09.511241] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.184 [2024-11-20 07:16:14.657950] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.184 Initializing NVMe Controllers 00:15:40.184 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.184 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:40.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:40.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:40.184 Initialization complete. Launching workers. 
00:15:40.184 Starting thread on core 2 00:15:40.184 Starting thread on core 3 00:15:40.184 Starting thread on core 1 00:15:40.184 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:40.445 [2024-11-20 07:16:14.950011] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.747 [2024-11-20 07:16:18.012042] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.747 Initializing NVMe Controllers 00:15:43.747 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.747 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.747 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:43.747 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:43.747 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:43.747 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:43.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:43.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:43.747 Initialization complete. Launching workers. 00:15:43.747 Starting thread on core 1 with urgent priority queue 00:15:43.747 Starting thread on core 2 with urgent priority queue 00:15:43.747 Starting thread on core 3 with urgent priority queue 00:15:43.747 Starting thread on core 0 with urgent priority queue 00:15:43.747 SPDK bdev Controller (SPDK2 ) core 0: 8774.00 IO/s 11.40 secs/100000 ios 00:15:43.747 SPDK bdev Controller (SPDK2 ) core 1: 6309.67 IO/s 15.85 secs/100000 ios 00:15:43.747 SPDK bdev Controller (SPDK2 ) core 2: 9867.00 IO/s 10.13 secs/100000 ios 00:15:43.747 SPDK bdev Controller (SPDK2 ) core 3: 5574.00 IO/s 17.94 secs/100000 ios 00:15:43.747 ======================================================== 00:15:43.747 00:15:43.747 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.747 [2024-11-20 07:16:18.310303] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.747 Initializing NVMe Controllers 00:15:43.747 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.747 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.747 Namespace ID: 1 size: 0GB 00:15:43.747 Initialization complete. 00:15:43.747 INFO: using host memory buffer for IO 00:15:43.747 Hello world! 
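For reference, the two perf passes above can be reproduced by hand. The controller is addressed through a local vfio-user socket rather than an IP endpoint, so the transport ID string carries the socket directory as traddr. A minimal sketch, reusing the build tree and socket path exactly as they appear in this log (SPDK_DIR and TRID are shorthand introduced here, not names from the harness):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # 4 KiB reads, queue depth 128, 5 seconds, one worker on core 1 (mask 0x2)
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

    # same shape for the write pass
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

These commands only run the initiator side; the target must already be serving the subsystem on that socket, as it is throughout this log.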
00:15:43.747 [2024-11-20 07:16:18.320355] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.747 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:44.008 [2024-11-20 07:16:18.618146] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.975 Initializing NVMe Controllers 00:15:44.975 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.975 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.975 Initialization complete. Launching workers. 00:15:44.975 submit (in ns) avg, min, max = 7501.0, 3980.0, 3999050.0 00:15:44.975 complete (in ns) avg, min, max = 18227.9, 2387.5, 6989640.0 00:15:44.975 00:15:44.975 Submit histogram 00:15:44.975 ================ 00:15:44.975 Range in us Cumulative Count 00:15:44.975 3.973 - 4.000: 0.1675% ( 32) 00:15:44.975 4.000 - 4.027: 1.6387% ( 281) 00:15:44.975 4.027 - 4.053: 7.6387% ( 1146) 00:15:44.975 4.053 - 4.080: 17.4869% ( 1881) 00:15:44.975 4.080 - 4.107: 29.6440% ( 2322) 00:15:44.975 4.107 - 4.133: 41.1885% ( 2205) 00:15:44.975 4.133 - 4.160: 55.5497% ( 2743) 00:15:44.975 4.160 - 4.187: 72.6859% ( 3273) 00:15:44.975 4.187 - 4.213: 85.6335% ( 2473) 00:15:44.975 4.213 - 4.240: 94.1309% ( 1623) 00:15:44.975 4.240 - 4.267: 98.0157% ( 742) 00:15:44.975 4.267 - 4.293: 99.1937% ( 225) 00:15:44.975 4.293 - 4.320: 99.4293% ( 45) 00:15:44.975 4.320 - 4.347: 99.4607% ( 6) 00:15:44.975 4.347 - 4.373: 99.4764% ( 3) 00:15:44.975 4.400 - 4.427: 99.4817% ( 1) 00:15:44.975 4.480 - 4.507: 99.4869% ( 1) 00:15:44.975 4.853 - 4.880: 99.4921% ( 1) 00:15:44.975 4.933 - 4.960: 99.4974% ( 1) 00:15:44.975 5.200 - 5.227: 99.5026% ( 1) 00:15:44.975 5.360 - 5.387: 99.5079% ( 1) 00:15:44.975 5.520 - 5.547: 99.5131% ( 1) 00:15:44.975 5.680 - 5.707: 99.5183% ( 1) 00:15:44.975 5.707 - 5.733: 99.5236% ( 1) 00:15:44.975 5.813 - 5.840: 99.5288% ( 1) 00:15:44.975 5.840 - 5.867: 99.5340% ( 1) 00:15:44.975 5.920 - 5.947: 99.5393% ( 1) 00:15:44.975 6.000 - 6.027: 99.5445% ( 1) 00:15:44.975 6.027 - 6.053: 99.5497% ( 1) 00:15:44.975 6.053 - 6.080: 99.5602% ( 2) 00:15:44.975 6.133 - 6.160: 99.5707% ( 2) 00:15:44.975 6.160 - 6.187: 99.5812% ( 2) 00:15:44.975 6.187 - 6.213: 99.5969% ( 3) 00:15:44.975 6.213 - 6.240: 99.6126% ( 3) 00:15:44.975 6.240 - 6.267: 99.6178% ( 1) 00:15:44.975 6.267 - 6.293: 99.6387% ( 4) 00:15:44.975 6.293 - 6.320: 99.6440% ( 1) 00:15:44.975 6.320 - 6.347: 99.6545% ( 2) 00:15:44.975 6.347 - 6.373: 99.6597% ( 1) 00:15:44.975 6.373 - 6.400: 99.6649% ( 1) 00:15:44.975 6.427 - 6.453: 99.6806% ( 3) 00:15:44.975 6.453 - 6.480: 99.6911% ( 2) 00:15:44.975 6.480 - 6.507: 99.6963% ( 1) 00:15:44.975 6.507 - 6.533: 99.7120% ( 3) 00:15:44.975 6.533 - 6.560: 99.7173% ( 1) 00:15:44.975 6.560 - 6.587: 99.7330% ( 3) 00:15:44.975 6.587 - 6.613: 99.7382% ( 1) 00:15:44.975 6.613 - 6.640: 99.7539% ( 3) 00:15:44.975 6.640 - 6.667: 99.7644% ( 2) 00:15:44.975 6.667 - 6.693: 99.7801% ( 3) 00:15:44.975 6.693 - 6.720: 99.7853% ( 1) 00:15:44.975 6.720 - 6.747: 99.7906% ( 1) 00:15:44.975 6.773 - 6.800: 99.8010% ( 2) 00:15:44.975 6.800 - 6.827: 99.8063% ( 1) 00:15:44.975 6.827 - 6.880: 99.8168% ( 2) 00:15:44.975 6.880 - 6.933: 99.8377% ( 4) 00:15:44.975 6.933 - 6.987: 99.8429% ( 1) 00:15:44.975 6.987 - 7.040: 99.8639% ( 4) 
00:15:44.975 7.040 - 7.093: 99.8691% ( 1) 00:15:44.975 7.093 - 7.147: 99.8796% ( 2) 00:15:44.975 7.200 - 7.253: 99.8848% ( 1) 00:15:44.975 7.520 - 7.573: 99.8901% ( 1) 00:15:44.975 7.733 - 7.787: 99.8953% ( 1) 00:15:44.975 8.000 - 8.053: 99.9005% ( 1) 00:15:44.975 8.320 - 8.373: 99.9058% ( 1) 00:15:44.975 8.747 - 8.800: 99.9110% ( 1) 00:15:44.975 9.813 - 9.867: 99.9162% ( 1) 00:15:44.975 3986.773 - 4014.080: 100.0000% ( 16) 00:15:44.975 00:15:44.975 Complete histogram 00:15:44.975 ================== 00:15:44.975 Range in us Cumulative Count 00:15:44.975 2.387 - 2.400: 0.0105% ( 2) 00:15:44.975 2.400 - 2.413: 0.6806% ( 128) 00:15:44.975 2.413 - 2.427: 1.1832% ( 96) 00:15:44.975 2.427 - 2.440: 1.3665% ( 35) 00:15:44.975 2.440 - 2.453: 59.9686% ( 11193) 00:15:44.975 2.453 - 2.467: 63.0942% ( 597) 00:15:44.975 2.467 - 2.480: 75.5812% ( 2385) 00:15:44.975 2.480 - 2.493: 79.3874% ( 727) 00:15:44.975 2.493 - 2.507: 81.5916% ( 421) 00:15:44.975 2.507 - [2024-11-20 07:16:19.714512] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.289 2.520: 85.3822% ( 724) 00:15:45.289 2.520 - 2.533: 92.2880% ( 1319) 00:15:45.289 2.533 - 2.547: 96.2565% ( 758) 00:15:45.289 2.547 - 2.560: 97.8220% ( 299) 00:15:45.289 2.560 - 2.573: 98.8534% ( 197) 00:15:45.289 2.573 - 2.587: 99.2565% ( 77) 00:15:45.289 2.587 - 2.600: 99.3403% ( 16) 00:15:45.289 2.653 - 2.667: 99.3455% ( 1) 00:15:45.289 4.240 - 4.267: 99.3508% ( 1) 00:15:45.289 4.293 - 4.320: 99.3560% ( 1) 00:15:45.289 4.320 - 4.347: 99.3613% ( 1) 00:15:45.289 4.400 - 4.427: 99.3665% ( 1) 00:15:45.289 4.453 - 4.480: 99.3717% ( 1) 00:15:45.289 4.480 - 4.507: 99.3770% ( 1) 00:15:45.289 4.507 - 4.533: 99.3822% ( 1) 00:15:45.289 4.613 - 4.640: 99.3927% ( 2) 00:15:45.289 4.640 - 4.667: 99.4084% ( 3) 00:15:45.289 4.720 - 4.747: 99.4188% ( 2) 00:15:45.289 4.747 - 4.773: 99.4398% ( 4) 00:15:45.289 4.800 - 4.827: 99.4450% ( 1) 00:15:45.289 4.853 - 4.880: 99.4503% ( 1) 00:15:45.289 4.880 - 4.907: 99.4555% ( 1) 00:15:45.289 5.040 - 5.067: 99.4607% ( 1) 00:15:45.289 5.093 - 5.120: 99.4660% ( 1) 00:15:45.289 5.120 - 5.147: 99.4712% ( 1) 00:15:45.289 5.147 - 5.173: 99.4869% ( 3) 00:15:45.289 5.173 - 5.200: 99.4921% ( 1) 00:15:45.289 5.200 - 5.227: 99.4974% ( 1) 00:15:45.289 5.227 - 5.253: 99.5079% ( 2) 00:15:45.289 5.253 - 5.280: 99.5131% ( 1) 00:15:45.289 5.307 - 5.333: 99.5183% ( 1) 00:15:45.289 5.440 - 5.467: 99.5288% ( 2) 00:15:45.289 5.493 - 5.520: 99.5340% ( 1) 00:15:45.289 5.547 - 5.573: 99.5445% ( 2) 00:15:45.289 5.573 - 5.600: 99.5497% ( 1) 00:15:45.289 5.760 - 5.787: 99.5550% ( 1) 00:15:45.289 5.813 - 5.840: 99.5602% ( 1) 00:15:45.289 6.053 - 6.080: 99.5654% ( 1) 00:15:45.289 6.773 - 6.800: 99.5707% ( 1) 00:15:45.289 9.333 - 9.387: 99.5759% ( 1) 00:15:45.289 9.760 - 9.813: 99.5812% ( 1) 00:15:45.289 12.747 - 12.800: 99.5864% ( 1) 00:15:45.289 12.960 - 13.013: 99.5916% ( 1) 00:15:45.289 43.520 - 43.733: 99.5969% ( 1) 00:15:45.289 166.400 - 167.253: 99.6021% ( 1) 00:15:45.289 1003.520 - 1010.347: 99.6073% ( 1) 00:15:45.289 1099.093 - 1105.920: 99.6126% ( 1) 00:15:45.289 3986.773 - 4014.080: 99.9948% ( 73) 00:15:45.289 6963.200 - 6990.507: 100.0000% ( 1) 00:15:45.289 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:45.289 
07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.289 [ 00:15:45.289 { 00:15:45.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.289 "subtype": "Discovery", 00:15:45.289 "listen_addresses": [], 00:15:45.289 "allow_any_host": true, 00:15:45.289 "hosts": [] 00:15:45.289 }, 00:15:45.289 { 00:15:45.289 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.289 "subtype": "NVMe", 00:15:45.289 "listen_addresses": [ 00:15:45.289 { 00:15:45.289 "trtype": "VFIOUSER", 00:15:45.289 "adrfam": "IPv4", 00:15:45.289 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.289 "trsvcid": "0" 00:15:45.289 } 00:15:45.289 ], 00:15:45.289 "allow_any_host": true, 00:15:45.289 "hosts": [], 00:15:45.289 "serial_number": "SPDK1", 00:15:45.289 "model_number": "SPDK bdev Controller", 00:15:45.289 "max_namespaces": 32, 00:15:45.289 "min_cntlid": 1, 00:15:45.289 "max_cntlid": 65519, 00:15:45.289 "namespaces": [ 00:15:45.289 { 00:15:45.289 "nsid": 1, 00:15:45.289 "bdev_name": "Malloc1", 00:15:45.289 "name": "Malloc1", 00:15:45.289 "nguid": "27681B81178A4E69B2BE0C332743942D", 00:15:45.289 "uuid": "27681b81-178a-4e69-b2be-0c332743942d" 00:15:45.289 }, 00:15:45.289 { 00:15:45.289 "nsid": 2, 00:15:45.289 "bdev_name": "Malloc3", 00:15:45.289 "name": "Malloc3", 00:15:45.289 "nguid": "8E5872E0887C4500B5E9CBF2D980D3CA", 00:15:45.289 "uuid": "8e5872e0-887c-4500-b5e9-cbf2d980d3ca" 00:15:45.289 } 00:15:45.289 ] 00:15:45.289 }, 00:15:45.289 { 00:15:45.289 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.289 "subtype": "NVMe", 00:15:45.289 "listen_addresses": [ 00:15:45.289 { 00:15:45.289 "trtype": "VFIOUSER", 00:15:45.289 "adrfam": "IPv4", 00:15:45.289 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.289 "trsvcid": "0" 00:15:45.289 } 00:15:45.289 ], 00:15:45.289 "allow_any_host": true, 00:15:45.289 "hosts": [], 00:15:45.289 "serial_number": "SPDK2", 00:15:45.289 "model_number": "SPDK bdev Controller", 00:15:45.289 "max_namespaces": 32, 00:15:45.289 "min_cntlid": 1, 00:15:45.289 "max_cntlid": 65519, 00:15:45.289 "namespaces": [ 00:15:45.289 { 00:15:45.289 "nsid": 1, 00:15:45.289 "bdev_name": "Malloc2", 00:15:45.289 "name": "Malloc2", 00:15:45.289 "nguid": "0BC7A7E2F0B4448E9FC06E7E636E38C3", 00:15:45.289 "uuid": "0bc7a7e2-f0b4-448e-9fc0-6e7e636e38c3" 00:15:45.289 } 00:15:45.289 ] 00:15:45.289 } 00:15:45.289 ] 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1240639 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:45.289 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:45.290 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:45.290 07:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.290 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.290 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:45.290 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:45.290 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:45.611 Malloc4 00:15:45.611 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:45.611 [2024-11-20 07:16:20.152296] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.611 [2024-11-20 07:16:20.285185] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.611 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.611 Asynchronous Event Request test 00:15:45.611 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.611 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.611 Registering asynchronous event callbacks... 00:15:45.611 Starting namespace attribute notice tests for all controllers... 00:15:45.611 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:45.611 aer_cb - Changed Namespace 00:15:45.611 Cleaning up... 
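The AER exchange above depends on ordering: the aer tool connects and arms its Asynchronous Event Request first, then a new bdev is attached as a second namespace, and the controller answers with a namespace-attribute-changed notice (log page 4, aen_event_type 0x02 in the callback line above). A minimal sketch of the same sequence with rpc.py, reusing SPDK_DIR and TRID from the earlier sketch (the touch file path is simply whatever was passed to the tool's -t option):

    RPC=$SPDK_DIR/scripts/rpc.py

    # subscriber: connect, arm AER, and create the touch file once the notice arrives
    $SPDK_DIR/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &

    # publisher: hot-add a second namespace to the live subsystem
    $RPC bdev_malloc_create 64 512 --name Malloc4
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

    # the JSON dump below should now list Malloc4 as nsid 2 under cnode2
    $RPC nvmf_get_subsystems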
00:15:45.878 [ 00:15:45.878 { 00:15:45.878 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.878 "subtype": "Discovery", 00:15:45.878 "listen_addresses": [], 00:15:45.878 "allow_any_host": true, 00:15:45.878 "hosts": [] 00:15:45.878 }, 00:15:45.878 { 00:15:45.878 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.878 "subtype": "NVMe", 00:15:45.878 "listen_addresses": [ 00:15:45.878 { 00:15:45.878 "trtype": "VFIOUSER", 00:15:45.878 "adrfam": "IPv4", 00:15:45.878 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.878 "trsvcid": "0" 00:15:45.878 } 00:15:45.878 ], 00:15:45.878 "allow_any_host": true, 00:15:45.878 "hosts": [], 00:15:45.878 "serial_number": "SPDK1", 00:15:45.878 "model_number": "SPDK bdev Controller", 00:15:45.878 "max_namespaces": 32, 00:15:45.878 "min_cntlid": 1, 00:15:45.878 "max_cntlid": 65519, 00:15:45.878 "namespaces": [ 00:15:45.878 { 00:15:45.878 "nsid": 1, 00:15:45.878 "bdev_name": "Malloc1", 00:15:45.878 "name": "Malloc1", 00:15:45.878 "nguid": "27681B81178A4E69B2BE0C332743942D", 00:15:45.878 "uuid": "27681b81-178a-4e69-b2be-0c332743942d" 00:15:45.878 }, 00:15:45.878 { 00:15:45.878 "nsid": 2, 00:15:45.878 "bdev_name": "Malloc3", 00:15:45.878 "name": "Malloc3", 00:15:45.878 "nguid": "8E5872E0887C4500B5E9CBF2D980D3CA", 00:15:45.878 "uuid": "8e5872e0-887c-4500-b5e9-cbf2d980d3ca" 00:15:45.878 } 00:15:45.878 ] 00:15:45.878 }, 00:15:45.878 { 00:15:45.878 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.878 "subtype": "NVMe", 00:15:45.878 "listen_addresses": [ 00:15:45.878 { 00:15:45.878 "trtype": "VFIOUSER", 00:15:45.878 "adrfam": "IPv4", 00:15:45.878 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.878 "trsvcid": "0" 00:15:45.878 } 00:15:45.878 ], 00:15:45.878 "allow_any_host": true, 00:15:45.878 "hosts": [], 00:15:45.878 "serial_number": "SPDK2", 00:15:45.878 "model_number": "SPDK bdev Controller", 00:15:45.878 "max_namespaces": 32, 00:15:45.878 "min_cntlid": 1, 00:15:45.878 "max_cntlid": 65519, 00:15:45.878 "namespaces": [ 00:15:45.878 { 00:15:45.878 "nsid": 1, 00:15:45.878 "bdev_name": "Malloc2", 00:15:45.878 "name": "Malloc2", 00:15:45.878 "nguid": "0BC7A7E2F0B4448E9FC06E7E636E38C3", 00:15:45.878 "uuid": "0bc7a7e2-f0b4-448e-9fc0-6e7e636e38c3" 00:15:45.878 }, 00:15:45.878 { 00:15:45.878 "nsid": 2, 00:15:45.878 "bdev_name": "Malloc4", 00:15:45.878 "name": "Malloc4", 00:15:45.878 "nguid": "624B495228AE42E290E458F5FBC1F90F", 00:15:45.878 "uuid": "624b4952-28ae-42e2-90e4-58f5fbc1f90f" 00:15:45.878 } 00:15:45.878 ] 00:15:45.878 } 00:15:45.878 ] 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1240639 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1231481 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1231481 ']' 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1231481 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1231481 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1231481' 00:15:45.878 killing process with pid 1231481 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1231481 00:15:45.878 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1231481 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1240907 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1240907' 00:15:46.140 Process pid: 1240907 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1240907 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1240907 ']' 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:46.140 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.140 [2024-11-20 07:16:20.780320] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:46.140 [2024-11-20 07:16:20.781012] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:15:46.140 [2024-11-20 07:16:20.781050] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.140 [2024-11-20 07:16:20.849859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.140 [2024-11-20 07:16:20.884610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.140 [2024-11-20 07:16:20.884642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.140 [2024-11-20 07:16:20.884649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.140 [2024-11-20 07:16:20.884656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.140 [2024-11-20 07:16:20.884662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.140 [2024-11-20 07:16:20.886159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.140 [2024-11-20 07:16:20.886272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.140 [2024-11-20 07:16:20.886425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.140 [2024-11-20 07:16:20.886426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.402 [2024-11-20 07:16:20.942690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:46.402 [2024-11-20 07:16:20.942690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:46.402 [2024-11-20 07:16:20.943817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:46.402 [2024-11-20 07:16:20.944548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:46.402 [2024-11-20 07:16:20.944629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
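Here the harness brings the target back up in interrupt mode: nvmf_tgt is launched with --interrupt-mode, and the trace lines that follow recreate the VFIOUSER transport with the extra '-M -I' arguments and rebuild both subsystems. A minimal sketch of that bring-up, reusing SPDK_DIR from the first sketch (the meaning of -M/-I is not spelled out in this log; they are just the transport_args the script passes for this variant):

    # relaunch the target in interrupt mode on cores 0-3
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    sleep 1   # the script waits briefly for the RPC socket to come up

    RPC=$SPDK_DIR/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER -M -I

    # same two-device layout as the earlier run
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $RPC bdev_malloc_create 64 512 -b Malloc$i
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done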
00:15:46.973 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:46.973 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:46.973 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:47.915 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:48.176 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:48.176 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:48.176 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.176 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:48.176 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.436 Malloc1 00:15:48.436 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:48.436 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:48.696 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:48.956 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.956 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:48.956 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.956 Malloc2 00:15:49.215 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.215 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.475 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 1240907 ']' 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1240907' 00:15:49.735 killing process with pid 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1240907 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:49.735 00:15:49.735 real 0m51.895s 00:15:49.735 user 3m19.004s 00:15:49.735 sys 0m2.750s 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.735 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:49.735 ************************************ 00:15:49.735 END TEST nvmf_vfio_user 00:15:49.735 ************************************ 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.996 ************************************ 00:15:49.996 START TEST nvmf_vfio_user_nvme_compliance 00:15:49.996 ************************************ 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:49.996 * Looking for test storage... 
00:15:49.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.996 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:49.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.997 --rc genhtml_branch_coverage=1 00:15:49.997 --rc genhtml_function_coverage=1 00:15:49.997 --rc genhtml_legend=1 00:15:49.997 --rc geninfo_all_blocks=1 00:15:49.997 --rc geninfo_unexecuted_blocks=1 00:15:49.997 00:15:49.997 ' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:49.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.997 --rc genhtml_branch_coverage=1 00:15:49.997 --rc genhtml_function_coverage=1 00:15:49.997 --rc genhtml_legend=1 00:15:49.997 --rc geninfo_all_blocks=1 00:15:49.997 --rc geninfo_unexecuted_blocks=1 00:15:49.997 00:15:49.997 ' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:49.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.997 --rc genhtml_branch_coverage=1 00:15:49.997 --rc genhtml_function_coverage=1 00:15:49.997 --rc genhtml_legend=1 00:15:49.997 --rc geninfo_all_blocks=1 00:15:49.997 --rc geninfo_unexecuted_blocks=1 00:15:49.997 00:15:49.997 ' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:49.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.997 --rc genhtml_branch_coverage=1 00:15:49.997 --rc genhtml_function_coverage=1 00:15:49.997 --rc genhtml_legend=1 00:15:49.997 --rc geninfo_all_blocks=1 00:15:49.997 --rc 
geninfo_unexecuted_blocks=1 00:15:49.997 00:15:49.997 ' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.997 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1241667 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1241667' 00:15:50.259 Process pid: 1241667 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1241667 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1241667 ']' 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:50.259 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.259 [2024-11-20 07:16:24.828402] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:15:50.259 [2024-11-20 07:16:24.828454] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:50.259 [2024-11-20 07:16:24.907005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:50.259 [2024-11-20 07:16:24.942197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:50.259 [2024-11-20 07:16:24.942231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:50.259 [2024-11-20 07:16:24.942240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:50.259 [2024-11-20 07:16:24.942246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:50.259 [2024-11-20 07:16:24.942252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:50.259 [2024-11-20 07:16:24.943626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:50.259 [2024-11-20 07:16:24.943741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:50.259 [2024-11-20 07:16:24.943744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:51.203 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:51.203 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0
00:15:51.203 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:52.146 malloc0
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.146 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:15:52.146
00:15:52.146 CUnit - A unit testing framework for C - Version 2.1-3
00:15:52.146 http://cunit.sourceforge.net/
00:15:52.146
00:15:52.146
00:15:52.146 Suite: nvme_compliance
00:15:52.407 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 07:16:26.912316] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.407 [2024-11-20 07:16:26.913667] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:15:52.407 [2024-11-20 07:16:26.913681] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:15:52.407 [2024-11-20 07:16:26.913685] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:15:52.407 [2024-11-20 07:16:26.915333] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.407 passed
00:15:52.407 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 07:16:27.007900] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.407 [2024-11-20 07:16:27.010930] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.407 passed
00:15:52.407 Test: admin_identify_ns ...[2024-11-20 07:16:27.107095] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.407 [2024-11-20 07:16:27.166882] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:15:52.667 [2024-11-20 07:16:27.174877] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:15:52.667 [2024-11-20 07:16:27.199010] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.667 passed
00:15:52.667 Test: admin_get_features_mandatory_features ...[2024-11-20 07:16:27.289631] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.667 [2024-11-20 07:16:27.292647] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.667 passed
00:15:52.667 Test: admin_get_features_optional_features ...[2024-11-20 07:16:27.387166] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.668 [2024-11-20 07:16:27.390179] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.668 passed
00:15:52.928 Test: admin_set_features_number_of_queues ...[2024-11-20 07:16:27.483317] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.928 [2024-11-20 07:16:27.587968] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:52.928 passed
00:15:52.928 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 07:16:27.681593] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:52.928 [2024-11-20 07:16:27.684607] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.187 passed
00:15:53.187 Test: admin_get_log_page_with_lpo ...[2024-11-20 07:16:27.776733] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.187 [2024-11-20 07:16:27.842876] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:15:53.187 [2024-11-20 07:16:27.855922] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.187 passed
00:15:53.187 Test: fabric_property_get ...[2024-11-20 07:16:27.947972] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.187 [2024-11-20 07:16:27.949227] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:15:53.187 [2024-11-20 07:16:27.951001] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.447 passed
00:15:53.447 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 07:16:28.047638] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.447 [2024-11-20 07:16:28.048887] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:15:53.447 [2024-11-20 07:16:28.050663] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.447 passed
00:15:53.447 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 07:16:28.144119] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.708 [2024-11-20 07:16:28.227871] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:53.708 [2024-11-20 07:16:28.243868] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:53.708 [2024-11-20 07:16:28.248954] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.708 passed
00:15:53.708 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 07:16:28.340541] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.708 [2024-11-20 07:16:28.341779] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:15:53.708 [2024-11-20 07:16:28.343558] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.708 passed
00:15:53.708 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 07:16:28.435104] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.970 [2024-11-20 07:16:28.514868] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:53.970 [2024-11-20 07:16:28.538880] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:53.970 [2024-11-20 07:16:28.543953] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.970 passed
00:15:53.970 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 07:16:28.634593] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:53.970 [2024-11-20 07:16:28.635843] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:53.970 [2024-11-20 07:16:28.635865] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:53.970 [2024-11-20 07:16:28.637612] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:53.970 passed
00:15:53.970 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 07:16:28.730749] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:54.232 [2024-11-20 07:16:28.818871] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:54.232 [2024-11-20 07:16:28.828868] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:54.232 [2024-11-20 07:16:28.836874] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:54.232 [2024-11-20 07:16:28.844870] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:54.232 [2024-11-20 07:16:28.873955] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:54.232 passed
00:15:54.232 Test: admin_create_io_sq_verify_pc ...[2024-11-20 07:16:28.965558] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:54.232 [2024-11-20 07:16:28.981875] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:54.493 [2024-11-20 07:16:28.999711] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:54.493 passed
00:15:54.493 Test: admin_create_io_qp_max_qps ...[2024-11-20 07:16:29.094217] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:55.879 [2024-11-20 07:16:30.205875] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:15:55.879 [2024-11-20 07:16:30.585840] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:55.879 passed
00:15:56.140 Test: admin_create_io_sq_shared_cq ...[2024-11-20 07:16:30.678123] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:56.140 [2024-11-20 07:16:30.809878] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:56.140 [2024-11-20 07:16:30.841927] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:56.140 passed
00:15:56.140
00:15:56.140 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:56.140               suites      1      1    n/a      0        0
00:15:56.140                tests     18     18     18      0        0
00:15:56.140              asserts    360    360    360      0      n/a
00:15:56.140
00:15:56.140 Elapsed time =    1.648 seconds
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1241667
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1241667 ']'
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1241667
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:56.140 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1241667
00:15:56.401 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:56.401 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:56.401 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1241667'
00:15:56.401 killing process with pid 1241667
00:15:56.401 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1241667
00:15:56.401 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1241667
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:15:56.401
00:15:56.401 real	0m6.548s
00:15:56.401 user	0m18.615s
00:15:56.401 sys	0m0.541s
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:56.401 ************************************
00:15:56.401 END TEST nvmf_vfio_user_nvme_compliance
00:15:56.401 ************************************
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:56.401 07:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:56.663 ************************************
00:15:56.663 START TEST nvmf_vfio_user_fuzz
00:15:56.663 ************************************
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:56.663 * Looking for test storage...
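The compliance run that just finished is driven entirely over SPDK's JSON-RPC socket via the test tree's rpc_cmd wrapper. Condensed from the xtrace above, the target-side setup amounts to the following sequence; this is a sketch that assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH rather than the rpc_cmd helper the script actually uses:

    # vfio-user compliance target setup, as executed above via rpc_cmd (sketch).
    rpc.py nvmf_create_transport -t VFIOUSER        # register the vfio-user transport
    mkdir -p /var/run/vfio-user                     # directory backing the endpoint
    rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MiB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # The nvme_compliance binary then connects using:
    #   -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'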
00:15:56.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:15:56.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.663 --rc genhtml_branch_coverage=1
00:15:56.663 --rc genhtml_function_coverage=1
00:15:56.663 --rc genhtml_legend=1
00:15:56.663 --rc geninfo_all_blocks=1
00:15:56.663 --rc geninfo_unexecuted_blocks=1
00:15:56.663
00:15:56.663 '
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:15:56.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.663 --rc genhtml_branch_coverage=1
00:15:56.663 --rc genhtml_function_coverage=1
00:15:56.663 --rc genhtml_legend=1
00:15:56.663 --rc geninfo_all_blocks=1
00:15:56.663 --rc geninfo_unexecuted_blocks=1
00:15:56.663
00:15:56.663 '
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:15:56.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.663 --rc genhtml_branch_coverage=1
00:15:56.663 --rc genhtml_function_coverage=1
00:15:56.663 --rc genhtml_legend=1
00:15:56.663 --rc geninfo_all_blocks=1
00:15:56.663 --rc geninfo_unexecuted_blocks=1
00:15:56.663
00:15:56.663 '
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:15:56.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.663 --rc genhtml_branch_coverage=1
00:15:56.663 --rc genhtml_function_coverage=1
00:15:56.663 --rc genhtml_legend=1
00:15:56.663 --rc geninfo_all_blocks=1
00:15:56.663 --rc geninfo_unexecuted_blocks=1
00:15:56.663
00:15:56.663 '
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:56.663 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:15:56.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1243074
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1243074'
00:15:56.664 Process pid: 1243074
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1243074
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1243074 ']'
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:56.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:56.664 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:57.607 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:57.607 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0
00:15:57.607 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:58.549 malloc0
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.549 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
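The trid string assembled above is exactly what gets handed to SPDK's nvme_fuzz example next: it points the fuzzer at the vfio-user endpoint just created. A standalone sketch of that invocation follows, with values taken from this log; the flag meanings are paraphrased assumptions rather than documented here, so verify them against nvme_fuzz's own usage text:

    # Sketch: run SPDK's nvme_fuzz against the vfio-user target created above.
    #   -m 0x2      SPDK core mask (run on core 1) -- standard SPDK env option
    #   -t 30       assumed: run time in seconds
    #   -S 123456   assumed: fixed RNG seed for reproducible runs
    #   -F          transport ID of the subsystem to fuzz
    #   -N -a       flags as passed by vfio_user_fuzz.sh in this run
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 -F "$trid" -N -a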
00:15:58.809 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:30.934 Fuzzing completed. Shutting down the fuzz application 00:16:30.934 00:16:30.934 Dumping successful admin opcodes: 00:16:30.934 8, 9, 10, 24, 00:16:30.934 Dumping successful io opcodes: 00:16:30.934 0, 00:16:30.934 NS: 0x20000081ef00 I/O qp, Total commands completed: 1221479, total successful commands: 4787, random_seed: 273475072 00:16:30.934 NS: 0x20000081ef00 admin qp, Total commands completed: 153508, total successful commands: 1236, random_seed: 3769258240 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1243074 ']' 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1243074' 00:16:30.934 killing process with pid 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1243074 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:30.934 00:16:30.934 real 0m33.802s 00:16:30.934 user 0m40.070s 00:16:30.934 sys 0m24.670s 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:30.934 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.934 
************************************ 00:16:30.934 END TEST nvmf_vfio_user_fuzz 00:16:30.934 ************************************ 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.934 ************************************ 00:16:30.934 START TEST nvmf_auth_target 00:16:30.934 ************************************ 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:30.934 * Looking for test storage... 00:16:30.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.934 --rc genhtml_branch_coverage=1 00:16:30.934 --rc genhtml_function_coverage=1 00:16:30.934 --rc genhtml_legend=1 00:16:30.934 --rc geninfo_all_blocks=1 00:16:30.934 --rc geninfo_unexecuted_blocks=1 00:16:30.934 00:16:30.934 ' 00:16:30.934 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.935 --rc genhtml_branch_coverage=1 00:16:30.935 --rc genhtml_function_coverage=1 00:16:30.935 --rc genhtml_legend=1 00:16:30.935 --rc geninfo_all_blocks=1 00:16:30.935 --rc geninfo_unexecuted_blocks=1 00:16:30.935 00:16:30.935 ' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.935 --rc genhtml_branch_coverage=1 00:16:30.935 --rc genhtml_function_coverage=1 00:16:30.935 --rc genhtml_legend=1 00:16:30.935 --rc geninfo_all_blocks=1 00:16:30.935 --rc geninfo_unexecuted_blocks=1 00:16:30.935 00:16:30.935 ' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.935 --rc genhtml_branch_coverage=1 00:16:30.935 --rc genhtml_function_coverage=1 00:16:30.935 --rc genhtml_legend=1 00:16:30.935 --rc geninfo_all_blocks=1 00:16:30.935 --rc geninfo_unexecuted_blocks=1 00:16:30.935 00:16:30.935 ' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.935 07:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.935 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:39.082 
07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:39.082 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.082 07:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.082 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:39.083 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:39.083 Found net devices under 0000:31:00.0: cvl_0_0 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:39.083 Found net devices under 0000:31:00.1: cvl_0_1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.083 07:17:13 
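
nvmf_tcp_init, traced above, turns the two discovered ports into a point-to-point NVMe/TCP fabric: cvl_0_0 moves into a fresh network namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the ipts wrapper records an iptables ACCEPT rule so port 4420 is reachable. The same topology can be sketched with a veth pair on a machine without E810 hardware (interface and namespace names below are stand-ins for cvl_0_0/cvl_0_1/cvl_0_0_ns_spdk):

#!/usr/bin/env bash
set -e
# veth pair standing in for the two physical E810 ports
ip link add tgt0 type veth peer name ini0
ip netns add tgt_ns
ip link set tgt0 netns tgt_ns                  # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev ini0               # initiator side
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt0
ip link set ini0 up
ip netns exec tgt_ns ip link set tgt0 up
ip netns exec tgt_ns ip link set lo up
# open the NVMe/TCP port, as the ipts wrapper does in the trace above
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # same reachability check as the trace
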
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:39.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:16:39.083 00:16:39.083 --- 10.0.0.2 ping statistics --- 00:16:39.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.083 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:16:39.083 00:16:39.083 --- 10.0.0.1 ping statistics --- 00:16:39.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.083 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.083 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1253763 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1253763 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1253763 ']' 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:39.084 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1254091 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc699aaaa61a69c9ae47af17269207ef6fdf9b6839a146b7 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nbJ 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc699aaaa61a69c9ae47af17269207ef6fdf9b6839a146b7 0 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc699aaaa61a69c9ae47af17269207ef6fdf9b6839a146b7 0 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc699aaaa61a69c9ae47af17269207ef6fdf9b6839a146b7 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
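
gen_dhchap_key, whose expansion is traced above and repeats for each of the four key pairs below, draws len/2 bytes from /dev/urandom, renders them as a len-character hex string, and has an inline python step wrap that string in DHHC-1 framing: base64 of the secret bytes with their little-endian CRC-32 appended, prefixed by the digest id from the digests map (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). The DHHC-1:... blobs passed to nvme connect later in this log are exactly this encoding. A sketch of the framing helper, reconstructed from the trace (the python body is inferred from the output format, not quoted):

#!/usr/bin/env bash
# format_key <prefix> <hex-secret> <digest-id>, reconstructed from the trace:
#   DHHC-1:<%02x digest>:base64(secret || crc32_le(secret)):
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
secret = key.encode()                      # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")
print("%s:%02x:%s:" % (prefix, digest, base64.b64encode(secret + crc).decode()))
PYEOF
}

key=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex chars
format_key DHHC-1 "$key" 0                 # digest 0 == null, as for keys[0]
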
00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nbJ 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nbJ 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nbJ 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1ac04f9c56f15c7083a14de2ae34ba915ab2124880140d948771b0c3d6a65ac1 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DYK 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1ac04f9c56f15c7083a14de2ae34ba915ab2124880140d948771b0c3d6a65ac1 3 00:16:40.028 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1ac04f9c56f15c7083a14de2ae34ba915ab2124880140d948771b0c3d6a65ac1 3 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1ac04f9c56f15c7083a14de2ae34ba915ab2124880140d948771b0c3d6a65ac1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DYK 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DYK 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.DYK 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bbddbef8ec675215b913a30d16575ead 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.56C 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bbddbef8ec675215b913a30d16575ead 1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bbddbef8ec675215b913a30d16575ead 1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bbddbef8ec675215b913a30d16575ead 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.56C 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.56C 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.56C 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a395f883ec34c05dc909e5f1c62347781008fe7668a2cd66 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SmA 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a395f883ec34c05dc909e5f1c62347781008fe7668a2cd66 2 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a395f883ec34c05dc909e5f1c62347781008fe7668a2cd66 2 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.029 07:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a395f883ec34c05dc909e5f1c62347781008fe7668a2cd66 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SmA 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SmA 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.SmA 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:40.029 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6573b8b6226f7a2d0a10fb6eef3ddb6ae7e82183de60969 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lyW 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6573b8b6226f7a2d0a10fb6eef3ddb6ae7e82183de60969 2 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6573b8b6226f7a2d0a10fb6eef3ddb6ae7e82183de60969 2 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6573b8b6226f7a2d0a10fb6eef3ddb6ae7e82183de60969 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lyW 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lyW 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lyW 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=23e6e63e5d362ccdbc943ed0979a4620 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.SPN 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 23e6e63e5d362ccdbc943ed0979a4620 1 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 23e6e63e5d362ccdbc943ed0979a4620 1 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=23e6e63e5d362ccdbc943ed0979a4620 00:16:40.291 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.SPN 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.SPN 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.SPN 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=248f775cc80d6fde19ffc415b8312fc8e2bbf0202e0355674f91f998a763e542 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vPW 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 248f775cc80d6fde19ffc415b8312fc8e2bbf0202e0355674f91f998a763e542 3 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 248f775cc80d6fde19ffc415b8312fc8e2bbf0202e0355674f91f998a763e542 3 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=248f775cc80d6fde19ffc415b8312fc8e2bbf0202e0355674f91f998a763e542 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vPW 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vPW 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.vPW 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1253763 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1253763 ']' 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.292 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1254091 /var/tmp/host.sock 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1254091 ']' 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:40.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.553 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nbJ 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nbJ 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nbJ 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.DYK ]] 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK 00:16:40.814 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.56C 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.075 07:17:15 
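
From this point the loop registers every generated key file twice: rpc_cmd targets the nvmf_tgt application on the default /var/tmp/spdk.sock, while hostrpc points the same rpc.py at /var/tmp/host.sock, where the second spdk_tgt (pid 1254091, playing the NVMe host) listens. Condensed, one iteration looks like this (rpc.py stands for the full scripts/rpc.py path used in the trace; file names are the ones generated above):

# register key0/ckey0 with both applications via the keyring_file_add_key RPC
rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.nbJ                        # target, /var/tmp/spdk.sock
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nbJ  # host side
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK
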
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.56C 00:16:41.075 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.56C 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.SmA ]] 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SmA 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SmA 00:16:41.335 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SmA 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lyW 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lyW 00:16:41.335 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lyW 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.SPN ]] 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SPN 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SPN 00:16:41.596 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SPN 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:41.856 07:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vPW 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vPW 00:16:41.856 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vPW 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.117 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.117 
07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.378 00:16:42.378 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.378 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.378 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.639 { 00:16:42.639 "cntlid": 1, 00:16:42.639 "qid": 0, 00:16:42.639 "state": "enabled", 00:16:42.639 "thread": "nvmf_tgt_poll_group_000", 00:16:42.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:42.639 "listen_address": { 00:16:42.639 "trtype": "TCP", 00:16:42.639 "adrfam": "IPv4", 00:16:42.639 "traddr": "10.0.0.2", 00:16:42.639 "trsvcid": "4420" 00:16:42.639 }, 00:16:42.639 "peer_address": { 00:16:42.639 "trtype": "TCP", 00:16:42.639 "adrfam": "IPv4", 00:16:42.639 "traddr": "10.0.0.1", 00:16:42.639 "trsvcid": "38404" 00:16:42.639 }, 00:16:42.639 "auth": { 00:16:42.639 "state": "completed", 00:16:42.639 "digest": "sha256", 00:16:42.639 "dhgroup": "null" 00:16:42.639 } 00:16:42.639 } 00:16:42.639 ]' 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.639 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.899 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:42.900 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.840 07:17:18 
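
nvme_connect, expanded above for key0, exercises the kernel initiator as well: --dhchap-secret carries the host's DHHC-1 secret and --dhchap-ctrl-secret the controller secret, making the handshake bidirectional; nvme disconnect then tears the session down before the next key is tried. Stripped to its essentials (secret bodies elided with placeholders):

nvme connect -t tcp -a 10.0.0.2 -i 1 -l 0 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret "DHHC-1:00:<host key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
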
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.840 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.100 00:16:44.100 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.100 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.100 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.360 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.360 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.361 { 00:16:44.361 "cntlid": 3, 00:16:44.361 "qid": 0, 00:16:44.361 "state": "enabled", 00:16:44.361 "thread": "nvmf_tgt_poll_group_000", 00:16:44.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:44.361 "listen_address": { 00:16:44.361 "trtype": "TCP", 00:16:44.361 "adrfam": "IPv4", 00:16:44.361 "traddr": "10.0.0.2", 00:16:44.361 "trsvcid": "4420" 00:16:44.361 }, 00:16:44.361 "peer_address": { 00:16:44.361 "trtype": "TCP", 00:16:44.361 "adrfam": "IPv4", 00:16:44.361 "traddr": "10.0.0.1", 00:16:44.361 "trsvcid": "38412" 00:16:44.361 }, 00:16:44.361 "auth": { 00:16:44.361 "state": "completed", 00:16:44.361 "digest": "sha256", 00:16:44.361 "dhgroup": "null" 00:16:44.361 } 00:16:44.361 } 00:16:44.361 ]' 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.361 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:44.361 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.361 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.361 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.361 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.636 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:44.636 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:45.577 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.577 07:17:20 
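
Each connect_authenticate iteration around here follows the same shape: authorize the host NQN on cnode0 with the key pair for that index, attach a controller from the host application with the matching keys, then assert through nvmf_subsystem_get_qpairs that the qpair's auth block reports state "completed" plus the digest/dhgroup under test (for key3, whose ckey is empty, the ctrlr-key arguments drop out and authentication becomes unidirectional). The verification reduces to (rpc.py again standing for the full path):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # digest for this loop
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # dhgroup for this loop
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
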
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.577 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.838 00:16:45.838 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.838 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.838 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.099 { 00:16:46.099 "cntlid": 5, 00:16:46.099 "qid": 0, 00:16:46.099 "state": "enabled", 00:16:46.099 "thread": "nvmf_tgt_poll_group_000", 00:16:46.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.099 "listen_address": { 00:16:46.099 "trtype": "TCP", 00:16:46.099 "adrfam": "IPv4", 00:16:46.099 "traddr": "10.0.0.2", 00:16:46.099 "trsvcid": "4420" 00:16:46.099 }, 00:16:46.099 "peer_address": { 00:16:46.099 "trtype": "TCP", 00:16:46.099 "adrfam": "IPv4", 00:16:46.099 "traddr": "10.0.0.1", 00:16:46.099 "trsvcid": "38442" 00:16:46.099 }, 00:16:46.099 "auth": { 00:16:46.099 "state": "completed", 00:16:46.099 "digest": "sha256", 00:16:46.099 "dhgroup": "null" 00:16:46.099 } 00:16:46.099 } 00:16:46.099 ]' 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.099 07:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.099 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.360 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:16:46.361 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:16:46.932 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.932 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.932 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.932 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.193 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.454 00:16:47.454 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.454 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.454 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.715 { 00:16:47.715 "cntlid": 7, 00:16:47.715 "qid": 0, 00:16:47.715 "state": "enabled", 00:16:47.715 "thread": "nvmf_tgt_poll_group_000", 00:16:47.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:47.715 "listen_address": { 00:16:47.715 "trtype": "TCP", 00:16:47.715 "adrfam": "IPv4", 00:16:47.715 "traddr": "10.0.0.2", 00:16:47.715 "trsvcid": "4420" 00:16:47.715 }, 00:16:47.715 "peer_address": { 00:16:47.715 "trtype": "TCP", 00:16:47.715 "adrfam": "IPv4", 00:16:47.715 "traddr": "10.0.0.1", 00:16:47.715 "trsvcid": "34444" 00:16:47.715 }, 00:16:47.715 "auth": { 00:16:47.715 "state": "completed", 00:16:47.715 "digest": "sha256", 00:16:47.715 "dhgroup": "null" 00:16:47.715 } 00:16:47.715 } 00:16:47.715 ]' 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.715 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.976 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:16:47.976 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.917 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.178 00:16:49.178 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.178 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.179 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.439 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.439 { 00:16:49.439 "cntlid": 9, 00:16:49.439 "qid": 0, 00:16:49.439 "state": "enabled", 00:16:49.439 "thread": "nvmf_tgt_poll_group_000", 00:16:49.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:49.439 "listen_address": { 00:16:49.439 "trtype": "TCP", 00:16:49.439 "adrfam": "IPv4", 00:16:49.439 "traddr": "10.0.0.2", 00:16:49.439 "trsvcid": "4420" 00:16:49.439 }, 00:16:49.439 "peer_address": { 00:16:49.439 "trtype": "TCP", 00:16:49.439 "adrfam": "IPv4", 00:16:49.439 "traddr": "10.0.0.1", 00:16:49.439 "trsvcid": "34482" 00:16:49.439 }, 00:16:49.439 "auth": { 00:16:49.439 "state": "completed", 00:16:49.439 "digest": "sha256", 00:16:49.439 "dhgroup": "ffdhe2048" 00:16:49.439 } 00:16:49.439 } 00:16:49.440 ]' 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.440 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.701 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:49.701 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.645 07:17:25 
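[annotation] The trace above cycles the same three RPCs for each key slot. A condensed sketch of that per-key sequence, assuming the rpc.py sockets and the named keys (key1/ckey1) were registered earlier in the run, as the trace implies; variable names are illustrative, not the literal auth.sh source:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Host side: pin the initiator to one digest/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: authorize the host NQN with a specific DH-HMAC-CHAP key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attaching a controller is what actually drives the handshake.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1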
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.645 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.906 00:16:50.906 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.906 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.906 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.167 { 00:16:51.167 "cntlid": 11, 00:16:51.167 "qid": 0, 00:16:51.167 "state": "enabled", 00:16:51.167 "thread": "nvmf_tgt_poll_group_000", 00:16:51.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.167 "listen_address": { 00:16:51.167 "trtype": "TCP", 00:16:51.167 "adrfam": "IPv4", 00:16:51.167 "traddr": "10.0.0.2", 00:16:51.167 "trsvcid": "4420" 00:16:51.167 }, 00:16:51.167 "peer_address": { 00:16:51.167 "trtype": "TCP", 00:16:51.167 "adrfam": "IPv4", 00:16:51.167 "traddr": "10.0.0.1", 00:16:51.167 "trsvcid": "34516" 00:16:51.167 }, 00:16:51.167 "auth": { 00:16:51.167 "state": "completed", 00:16:51.167 "digest": "sha256", 00:16:51.167 "dhgroup": "ffdhe2048" 00:16:51.167 } 00:16:51.167 } 00:16:51.167 ]' 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.167 07:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.167 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.428 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:51.428 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:52.370 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.370 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.371 07:17:26 
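[annotation] The nvme_connect helper seen in the trace reduces to a single nvme-cli invocation with the DH-HMAC-CHAP secrets passed inline. The flags below are the ones visible in the log; the two DHHC-1 strings are placeholders for the generated test keys that the log prints verbatim:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
dhchap_key='DHHC-1:01:<host key as printed above>:'       # placeholder
dhchap_ctrl_key='DHHC-1:02:<ctrl key as printed above>:'  # placeholder

# --hostid is the bare UUID, i.e. the hostnqn with its prefix stripped.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0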
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.371 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.371 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.371 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.371 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.631 00:16:52.631 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.631 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.631 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.892 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.892 { 00:16:52.892 "cntlid": 13, 00:16:52.892 "qid": 0, 00:16:52.892 "state": "enabled", 00:16:52.892 "thread": "nvmf_tgt_poll_group_000", 00:16:52.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:52.893 "listen_address": { 00:16:52.893 "trtype": "TCP", 00:16:52.893 "adrfam": "IPv4", 00:16:52.893 "traddr": "10.0.0.2", 00:16:52.893 "trsvcid": "4420" 00:16:52.893 }, 00:16:52.893 "peer_address": { 00:16:52.893 "trtype": "TCP", 00:16:52.893 "adrfam": "IPv4", 00:16:52.893 "traddr": "10.0.0.1", 00:16:52.893 "trsvcid": "34540" 00:16:52.893 }, 00:16:52.893 "auth": { 00:16:52.893 "state": "completed", 00:16:52.893 "digest": 
"sha256", 00:16:52.893 "dhgroup": "ffdhe2048" 00:16:52.893 } 00:16:52.893 } 00:16:52.893 ]' 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.893 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.153 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:16:53.153 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.094 07:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.094 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.354 00:16:54.354 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.354 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.354 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.354 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.354 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.354 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.354 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.614 { 00:16:54.614 "cntlid": 15, 00:16:54.614 "qid": 0, 00:16:54.614 "state": "enabled", 00:16:54.614 "thread": "nvmf_tgt_poll_group_000", 00:16:54.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:54.614 "listen_address": { 00:16:54.614 "trtype": "TCP", 00:16:54.614 "adrfam": "IPv4", 00:16:54.614 "traddr": "10.0.0.2", 00:16:54.614 "trsvcid": "4420" 00:16:54.614 }, 00:16:54.614 "peer_address": { 00:16:54.614 "trtype": "TCP", 00:16:54.614 "adrfam": "IPv4", 00:16:54.614 "traddr": "10.0.0.1", 00:16:54.614 
"trsvcid": "34578" 00:16:54.614 }, 00:16:54.614 "auth": { 00:16:54.614 "state": "completed", 00:16:54.614 "digest": "sha256", 00:16:54.614 "dhgroup": "ffdhe2048" 00:16:54.614 } 00:16:54.614 } 00:16:54.614 ]' 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.614 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.910 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:16:54.910 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.525 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.785 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:55.785 07:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.786 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.046 00:16:56.046 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.046 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.046 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.307 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.307 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.308 { 00:16:56.308 "cntlid": 17, 00:16:56.308 "qid": 0, 00:16:56.308 "state": "enabled", 00:16:56.308 "thread": "nvmf_tgt_poll_group_000", 00:16:56.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:56.308 "listen_address": { 00:16:56.308 "trtype": "TCP", 00:16:56.308 "adrfam": "IPv4", 
00:16:56.308 "traddr": "10.0.0.2", 00:16:56.308 "trsvcid": "4420" 00:16:56.308 }, 00:16:56.308 "peer_address": { 00:16:56.308 "trtype": "TCP", 00:16:56.308 "adrfam": "IPv4", 00:16:56.308 "traddr": "10.0.0.1", 00:16:56.308 "trsvcid": "34608" 00:16:56.308 }, 00:16:56.308 "auth": { 00:16:56.308 "state": "completed", 00:16:56.308 "digest": "sha256", 00:16:56.308 "dhgroup": "ffdhe3072" 00:16:56.308 } 00:16:56.308 } 00:16:56.308 ]' 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.308 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.569 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:56.569 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:16:57.141 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.401 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.401 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.662 00:16:57.662 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.662 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.662 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.923 { 
00:16:57.923 "cntlid": 19, 00:16:57.923 "qid": 0, 00:16:57.923 "state": "enabled", 00:16:57.923 "thread": "nvmf_tgt_poll_group_000", 00:16:57.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:57.923 "listen_address": { 00:16:57.923 "trtype": "TCP", 00:16:57.923 "adrfam": "IPv4", 00:16:57.923 "traddr": "10.0.0.2", 00:16:57.923 "trsvcid": "4420" 00:16:57.923 }, 00:16:57.923 "peer_address": { 00:16:57.923 "trtype": "TCP", 00:16:57.923 "adrfam": "IPv4", 00:16:57.923 "traddr": "10.0.0.1", 00:16:57.923 "trsvcid": "60582" 00:16:57.923 }, 00:16:57.923 "auth": { 00:16:57.923 "state": "completed", 00:16:57.923 "digest": "sha256", 00:16:57.923 "dhgroup": "ffdhe3072" 00:16:57.923 } 00:16:57.923 } 00:16:57.923 ]' 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.923 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.184 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:58.184 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.126 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.386 00:16:59.386 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.386 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.386 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.647 07:17:34 
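[annotation] Between iterations the test tears everything down so the next key/dhgroup pair starts from a clean state; the steps repeated throughout the trace are:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

hostrpc bdev_nvme_detach_controller nvme0        # drop the host-side bdev controller
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # detach the kernel initiator, if connected
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"   # de-authorize the host NQN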
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.647 { 00:16:59.647 "cntlid": 21, 00:16:59.647 "qid": 0, 00:16:59.647 "state": "enabled", 00:16:59.647 "thread": "nvmf_tgt_poll_group_000", 00:16:59.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:59.647 "listen_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.2", 00:16:59.647 "trsvcid": "4420" 00:16:59.647 }, 00:16:59.647 "peer_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.1", 00:16:59.647 "trsvcid": "60612" 00:16:59.647 }, 00:16:59.647 "auth": { 00:16:59.647 "state": "completed", 00:16:59.647 "digest": "sha256", 00:16:59.647 "dhgroup": "ffdhe3072" 00:16:59.647 } 00:16:59.647 } 00:16:59.647 ]' 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.647 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.908 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:16:59.908 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.866 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.126 00:17:01.126 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.126 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.126 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.387 07:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.387 { 00:17:01.387 "cntlid": 23, 00:17:01.387 "qid": 0, 00:17:01.387 "state": "enabled", 00:17:01.387 "thread": "nvmf_tgt_poll_group_000", 00:17:01.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:01.387 "listen_address": { 00:17:01.387 "trtype": "TCP", 00:17:01.387 "adrfam": "IPv4", 00:17:01.387 "traddr": "10.0.0.2", 00:17:01.387 "trsvcid": "4420" 00:17:01.387 }, 00:17:01.387 "peer_address": { 00:17:01.387 "trtype": "TCP", 00:17:01.387 "adrfam": "IPv4", 00:17:01.387 "traddr": "10.0.0.1", 00:17:01.387 "trsvcid": "60640" 00:17:01.387 }, 00:17:01.387 "auth": { 00:17:01.387 "state": "completed", 00:17:01.387 "digest": "sha256", 00:17:01.387 "dhgroup": "ffdhe3072" 00:17:01.387 } 00:17:01.387 } 00:17:01.387 ]' 00:17:01.387 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.387 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.649 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:01.649 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.589 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.850 00:17:02.850 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.850 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.850 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.111 { 00:17:03.111 "cntlid": 25, 00:17:03.111 "qid": 0, 00:17:03.111 "state": "enabled", 00:17:03.111 "thread": "nvmf_tgt_poll_group_000", 00:17:03.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:03.111 "listen_address": { 00:17:03.111 "trtype": "TCP", 00:17:03.111 "adrfam": "IPv4", 00:17:03.111 "traddr": "10.0.0.2", 00:17:03.111 "trsvcid": "4420" 00:17:03.111 }, 00:17:03.111 "peer_address": { 00:17:03.111 "trtype": "TCP", 00:17:03.111 "adrfam": "IPv4", 00:17:03.111 "traddr": "10.0.0.1", 00:17:03.111 "trsvcid": "60662" 00:17:03.111 }, 00:17:03.111 "auth": { 00:17:03.111 "state": "completed", 00:17:03.111 "digest": "sha256", 00:17:03.111 "dhgroup": "ffdhe4096" 00:17:03.111 } 00:17:03.111 } 00:17:03.111 ]' 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.111 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.372 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:03.372 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.314 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.314 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.314 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.314 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.314 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.577 00:17:04.577 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.577 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.577 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.837 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.837 { 00:17:04.837 "cntlid": 27, 00:17:04.837 "qid": 0, 00:17:04.837 "state": "enabled", 00:17:04.837 "thread": "nvmf_tgt_poll_group_000", 00:17:04.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:04.838 "listen_address": { 00:17:04.838 "trtype": "TCP", 00:17:04.838 "adrfam": "IPv4", 00:17:04.838 "traddr": "10.0.0.2", 00:17:04.838 "trsvcid": "4420" 00:17:04.838 }, 00:17:04.838 "peer_address": { 00:17:04.838 "trtype": "TCP", 00:17:04.838 "adrfam": "IPv4", 00:17:04.838 "traddr": "10.0.0.1", 00:17:04.838 "trsvcid": "60674" 00:17:04.838 }, 00:17:04.838 "auth": { 00:17:04.838 "state": "completed", 00:17:04.838 "digest": "sha256", 00:17:04.838 "dhgroup": "ffdhe4096" 00:17:04.838 } 00:17:04.838 } 00:17:04.838 ]' 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.838 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.098 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:05.098 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:06.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.042 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.304 00:17:06.304 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
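[editor's note] The records above are one pass of the test's connect_authenticate helper (here: digest sha256, DH group ffdhe4096, key2). For readers reconstructing the sequence outside the harness, a minimal sketch follows; the RPC script path, host socket, NQNs, and flags are taken verbatim from the trace, while key2/ckey2 are key names registered earlier in the run (not shown in this excerpt) and stand in for real DH-HMAC-CHAP keys:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
# 1) Pin the standalone SPDK host (its RPC server listens on /var/tmp/host.sock)
#    to a single digest/DH-group combination for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# 2) Authorize the host on the target subsystem with the keys under test.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3) Attach a controller from the host side, forcing DH-HMAC-CHAP authentication.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 4) Confirm the negotiated digest, dhgroup, and auth state on the target, then detach.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
The [[ sha256 == \s\h\a\2\5\6 ]]-style records in the trace are this step 4: jq extracts .auth.digest, .auth.dhgroup, and .auth.state from the qpair listing, and the test asserts each against the value it configured.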
00:17:06.304 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.304 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.565 { 00:17:06.565 "cntlid": 29, 00:17:06.565 "qid": 0, 00:17:06.565 "state": "enabled", 00:17:06.565 "thread": "nvmf_tgt_poll_group_000", 00:17:06.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:06.565 "listen_address": { 00:17:06.565 "trtype": "TCP", 00:17:06.565 "adrfam": "IPv4", 00:17:06.565 "traddr": "10.0.0.2", 00:17:06.565 "trsvcid": "4420" 00:17:06.565 }, 00:17:06.565 "peer_address": { 00:17:06.565 "trtype": "TCP", 00:17:06.565 "adrfam": "IPv4", 00:17:06.565 "traddr": "10.0.0.1", 00:17:06.565 "trsvcid": "60702" 00:17:06.565 }, 00:17:06.565 "auth": { 00:17:06.565 "state": "completed", 00:17:06.565 "digest": "sha256", 00:17:06.565 "dhgroup": "ffdhe4096" 00:17:06.565 } 00:17:06.565 } 00:17:06.565 ]' 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.565 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.826 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:06.826 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: 
--dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.767 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.028 00:17:08.028 07:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.028 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.028 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.290 { 00:17:08.290 "cntlid": 31, 00:17:08.290 "qid": 0, 00:17:08.290 "state": "enabled", 00:17:08.290 "thread": "nvmf_tgt_poll_group_000", 00:17:08.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:08.290 "listen_address": { 00:17:08.290 "trtype": "TCP", 00:17:08.290 "adrfam": "IPv4", 00:17:08.290 "traddr": "10.0.0.2", 00:17:08.290 "trsvcid": "4420" 00:17:08.290 }, 00:17:08.290 "peer_address": { 00:17:08.290 "trtype": "TCP", 00:17:08.290 "adrfam": "IPv4", 00:17:08.290 "traddr": "10.0.0.1", 00:17:08.290 "trsvcid": "49504" 00:17:08.290 }, 00:17:08.290 "auth": { 00:17:08.290 "state": "completed", 00:17:08.290 "digest": "sha256", 00:17:08.290 "dhgroup": "ffdhe4096" 00:17:08.290 } 00:17:08.290 } 00:17:08.290 ]' 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.290 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.290 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.290 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.290 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.551 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:08.551 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.493 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.493 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:09.493 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.493 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.493 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.493 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.494 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.065 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.065 { 00:17:10.065 "cntlid": 33, 00:17:10.065 "qid": 0, 00:17:10.065 "state": "enabled", 00:17:10.065 "thread": "nvmf_tgt_poll_group_000", 00:17:10.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:10.065 "listen_address": { 00:17:10.065 "trtype": "TCP", 00:17:10.065 "adrfam": "IPv4", 00:17:10.065 "traddr": "10.0.0.2", 00:17:10.065 "trsvcid": "4420" 00:17:10.065 }, 00:17:10.065 "peer_address": { 00:17:10.065 "trtype": "TCP", 00:17:10.065 "adrfam": "IPv4", 00:17:10.065 "traddr": "10.0.0.1", 00:17:10.065 "trsvcid": "49524" 00:17:10.065 }, 00:17:10.065 "auth": { 00:17:10.065 "state": "completed", 00:17:10.065 "digest": "sha256", 00:17:10.065 "dhgroup": "ffdhe6144" 00:17:10.065 } 00:17:10.065 } 00:17:10.065 ]' 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.065 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.326 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.326 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.326 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.326 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:10.326 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.267 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.267 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.839 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.839 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.839 { 00:17:11.839 "cntlid": 35, 00:17:11.839 "qid": 0, 00:17:11.839 "state": "enabled", 00:17:11.839 "thread": "nvmf_tgt_poll_group_000", 00:17:11.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:11.839 "listen_address": { 00:17:11.839 "trtype": "TCP", 00:17:11.839 "adrfam": "IPv4", 00:17:11.839 "traddr": "10.0.0.2", 00:17:11.839 "trsvcid": "4420" 00:17:11.840 }, 00:17:11.840 "peer_address": { 00:17:11.840 "trtype": "TCP", 00:17:11.840 "adrfam": "IPv4", 00:17:11.840 "traddr": "10.0.0.1", 00:17:11.840 "trsvcid": "49564" 00:17:11.840 }, 00:17:11.840 "auth": { 00:17:11.840 "state": "completed", 00:17:11.840 "digest": "sha256", 00:17:11.840 "dhgroup": "ffdhe6144" 00:17:11.840 } 00:17:11.840 } 00:17:11.840 ]' 00:17:11.840 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.840 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.840 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:12.109 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.052 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.313 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.574 00:17:13.574 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.574 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.574 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.835 { 00:17:13.835 "cntlid": 37, 00:17:13.835 "qid": 0, 00:17:13.835 "state": "enabled", 00:17:13.835 "thread": "nvmf_tgt_poll_group_000", 00:17:13.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:13.835 "listen_address": { 00:17:13.835 "trtype": "TCP", 00:17:13.835 "adrfam": "IPv4", 00:17:13.835 "traddr": "10.0.0.2", 00:17:13.835 "trsvcid": "4420" 00:17:13.835 }, 00:17:13.835 "peer_address": { 00:17:13.835 "trtype": "TCP", 00:17:13.835 "adrfam": "IPv4", 00:17:13.835 "traddr": "10.0.0.1", 00:17:13.835 "trsvcid": "49594" 00:17:13.835 }, 00:17:13.835 "auth": { 00:17:13.835 "state": "completed", 00:17:13.835 "digest": "sha256", 00:17:13.835 "dhgroup": "ffdhe6144" 00:17:13.835 } 00:17:13.835 } 00:17:13.835 ]' 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:13.835 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.096 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:14.096 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.038 07:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.038 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.299 00:17:15.299 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.299 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.299 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.560 { 00:17:15.560 "cntlid": 39, 00:17:15.560 "qid": 0, 00:17:15.560 "state": "enabled", 00:17:15.560 "thread": "nvmf_tgt_poll_group_000", 00:17:15.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:15.560 "listen_address": { 00:17:15.560 "trtype": "TCP", 00:17:15.560 "adrfam": "IPv4", 00:17:15.560 "traddr": "10.0.0.2", 00:17:15.560 "trsvcid": "4420" 00:17:15.560 }, 00:17:15.560 "peer_address": { 00:17:15.560 "trtype": "TCP", 00:17:15.560 "adrfam": "IPv4", 00:17:15.560 "traddr": "10.0.0.1", 00:17:15.560 "trsvcid": "49620" 00:17:15.560 }, 00:17:15.560 "auth": { 00:17:15.560 "state": "completed", 00:17:15.560 "digest": "sha256", 00:17:15.560 "dhgroup": "ffdhe6144" 00:17:15.560 } 00:17:15.560 } 00:17:15.560 ]' 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.560 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.821 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:15.821 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
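For reference, each sha256 iteration in this transcript reduces to the same host-side RPC sequence. A minimal sketch of one pass follows; the socket path, subsystem NQN, transport address, and key names are taken from this run, while the host NQN and the DHHC-1 secrets are elided as <host_nqn>, rpc.py is invoked by its short name, and the target is assumed to listen on its default RPC socket:

  # host side: restrict the initiator to one digest/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: register the host on the subsystem with its DH-HMAC-CHAP key(s)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host_nqn> \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, authenticating with the same key pair
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host_nqn> \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The same key material is then exercised through the kernel initiator with nvme connect --dhchap-secret / --dhchap-ctrl-secret, as in the records above.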
00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.764 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.336 00:17:17.336 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.336 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.336 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.598 { 00:17:17.598 "cntlid": 41, 00:17:17.598 "qid": 0, 00:17:17.598 "state": "enabled", 00:17:17.598 "thread": "nvmf_tgt_poll_group_000", 00:17:17.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:17.598 "listen_address": { 00:17:17.598 "trtype": "TCP", 00:17:17.598 "adrfam": "IPv4", 00:17:17.598 "traddr": "10.0.0.2", 00:17:17.598 "trsvcid": "4420" 00:17:17.598 }, 00:17:17.598 "peer_address": { 00:17:17.598 "trtype": "TCP", 00:17:17.598 "adrfam": "IPv4", 00:17:17.598 "traddr": "10.0.0.1", 00:17:17.598 "trsvcid": "49648" 00:17:17.598 }, 00:17:17.598 "auth": { 00:17:17.598 "state": "completed", 00:17:17.598 "digest": "sha256", 00:17:17.598 "dhgroup": "ffdhe8192" 00:17:17.598 } 00:17:17.598 } 00:17:17.598 ]' 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.598 07:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.598 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.858 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:17.858 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:18.428 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.428 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:18.428 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.428 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.688 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.260 00:17:19.260 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.260 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.260 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.520 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.521 { 00:17:19.521 "cntlid": 43, 00:17:19.521 "qid": 0, 00:17:19.521 "state": "enabled", 00:17:19.521 "thread": "nvmf_tgt_poll_group_000", 00:17:19.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:19.521 "listen_address": { 00:17:19.521 "trtype": "TCP", 00:17:19.521 "adrfam": "IPv4", 00:17:19.521 "traddr": "10.0.0.2", 00:17:19.521 "trsvcid": "4420" 00:17:19.521 }, 00:17:19.521 "peer_address": { 00:17:19.521 "trtype": "TCP", 00:17:19.521 "adrfam": "IPv4", 00:17:19.521 "traddr": "10.0.0.1", 00:17:19.521 "trsvcid": "59512" 00:17:19.521 }, 00:17:19.521 "auth": { 00:17:19.521 "state": "completed", 00:17:19.521 "digest": "sha256", 00:17:19.521 "dhgroup": "ffdhe8192" 00:17:19.521 } 00:17:19.521 } 00:17:19.521 ]' 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.521 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.782 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:19.782 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.725 07:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.725 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.297 00:17:21.297 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.297 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.297 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.297 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.558 { 00:17:21.558 "cntlid": 45, 00:17:21.558 "qid": 0, 00:17:21.558 "state": "enabled", 00:17:21.558 "thread": "nvmf_tgt_poll_group_000", 00:17:21.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:21.558 "listen_address": { 00:17:21.558 "trtype": "TCP", 00:17:21.558 "adrfam": "IPv4", 00:17:21.558 "traddr": "10.0.0.2", 00:17:21.558 "trsvcid": "4420" 00:17:21.558 }, 00:17:21.558 "peer_address": { 00:17:21.558 "trtype": "TCP", 00:17:21.558 "adrfam": "IPv4", 00:17:21.558 "traddr": "10.0.0.1", 00:17:21.558 "trsvcid": "59548" 00:17:21.558 }, 00:17:21.558 "auth": { 00:17:21.558 "state": "completed", 00:17:21.558 "digest": "sha256", 00:17:21.558 "dhgroup": "ffdhe8192" 00:17:21.558 } 00:17:21.558 } 00:17:21.558 ]' 00:17:21.558 
07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.558 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.819 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:21.819 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:22.391 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.652 07:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.652 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.223 00:17:23.223 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.223 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.223 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.484 { 00:17:23.484 "cntlid": 47, 00:17:23.484 "qid": 0, 00:17:23.484 "state": "enabled", 00:17:23.484 "thread": "nvmf_tgt_poll_group_000", 00:17:23.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:23.484 "listen_address": { 00:17:23.484 "trtype": "TCP", 00:17:23.484 "adrfam": "IPv4", 00:17:23.484 "traddr": "10.0.0.2", 00:17:23.484 "trsvcid": "4420" 00:17:23.484 }, 00:17:23.484 "peer_address": { 00:17:23.484 "trtype": "TCP", 00:17:23.484 "adrfam": "IPv4", 00:17:23.484 "traddr": "10.0.0.1", 00:17:23.484 "trsvcid": "59560" 00:17:23.484 }, 00:17:23.484 "auth": { 00:17:23.484 "state": "completed", 00:17:23.484 
"digest": "sha256", 00:17:23.484 "dhgroup": "ffdhe8192" 00:17:23.484 } 00:17:23.484 } 00:17:23.484 ]' 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.484 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.745 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:23.745 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:24.686 07:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.686 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.947 00:17:24.947 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.947 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.947 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.208 { 00:17:25.208 "cntlid": 49, 00:17:25.208 "qid": 0, 00:17:25.208 "state": "enabled", 00:17:25.208 "thread": "nvmf_tgt_poll_group_000", 00:17:25.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.208 "listen_address": { 00:17:25.208 "trtype": "TCP", 00:17:25.208 "adrfam": "IPv4", 
00:17:25.208 "traddr": "10.0.0.2", 00:17:25.208 "trsvcid": "4420" 00:17:25.208 }, 00:17:25.208 "peer_address": { 00:17:25.208 "trtype": "TCP", 00:17:25.208 "adrfam": "IPv4", 00:17:25.208 "traddr": "10.0.0.1", 00:17:25.208 "trsvcid": "59578" 00:17:25.208 }, 00:17:25.208 "auth": { 00:17:25.208 "state": "completed", 00:17:25.208 "digest": "sha384", 00:17:25.208 "dhgroup": "null" 00:17:25.208 } 00:17:25.208 } 00:17:25.208 ]' 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.208 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.469 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:25.469 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:26.048 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.313 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.313 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.574 00:17:26.574 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.574 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.574 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.835 { 00:17:26.835 "cntlid": 51, 00:17:26.835 "qid": 0, 00:17:26.835 "state": "enabled", 
00:17:26.835 "thread": "nvmf_tgt_poll_group_000", 00:17:26.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:26.835 "listen_address": { 00:17:26.835 "trtype": "TCP", 00:17:26.835 "adrfam": "IPv4", 00:17:26.835 "traddr": "10.0.0.2", 00:17:26.835 "trsvcid": "4420" 00:17:26.835 }, 00:17:26.835 "peer_address": { 00:17:26.835 "trtype": "TCP", 00:17:26.835 "adrfam": "IPv4", 00:17:26.835 "traddr": "10.0.0.1", 00:17:26.835 "trsvcid": "59608" 00:17:26.835 }, 00:17:26.835 "auth": { 00:17:26.835 "state": "completed", 00:17:26.835 "digest": "sha384", 00:17:26.835 "dhgroup": "null" 00:17:26.835 } 00:17:26.835 } 00:17:26.835 ]' 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.835 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.096 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:27.096 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.037 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.296 00:17:28.296 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.296 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.297 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.557 07:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.557 { 00:17:28.557 "cntlid": 53, 00:17:28.557 "qid": 0, 00:17:28.557 "state": "enabled", 00:17:28.557 "thread": "nvmf_tgt_poll_group_000", 00:17:28.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.557 "listen_address": { 00:17:28.557 "trtype": "TCP", 00:17:28.557 "adrfam": "IPv4", 00:17:28.557 "traddr": "10.0.0.2", 00:17:28.557 "trsvcid": "4420" 00:17:28.557 }, 00:17:28.557 "peer_address": { 00:17:28.557 "trtype": "TCP", 00:17:28.557 "adrfam": "IPv4", 00:17:28.557 "traddr": "10.0.0.1", 00:17:28.557 "trsvcid": "48506" 00:17:28.557 }, 00:17:28.557 "auth": { 00:17:28.557 "state": "completed", 00:17:28.557 "digest": "sha384", 00:17:28.557 "dhgroup": "null" 00:17:28.557 } 00:17:28.557 } 00:17:28.557 ]' 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.557 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.817 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:28.817 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:29.759 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.760 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.020 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.020 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.280 { 00:17:30.280 "cntlid": 55, 00:17:30.280 "qid": 0, 00:17:30.280 "state": "enabled", 00:17:30.280 "thread": "nvmf_tgt_poll_group_000", 00:17:30.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.280 "listen_address": { 00:17:30.280 "trtype": "TCP", 00:17:30.280 "adrfam": "IPv4", 00:17:30.280 "traddr": "10.0.0.2", 00:17:30.280 "trsvcid": "4420" 00:17:30.280 }, 00:17:30.280 "peer_address": { 00:17:30.280 "trtype": "TCP", 00:17:30.280 "adrfam": "IPv4", 00:17:30.280 "traddr": "10.0.0.1", 00:17:30.280 "trsvcid": "48542" 00:17:30.280 }, 00:17:30.280 "auth": { 00:17:30.280 "state": "completed", 00:17:30.280 "digest": "sha384", 00:17:30.280 "dhgroup": "null" 00:17:30.280 } 00:17:30.280 } 00:17:30.280 ]' 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.280 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.541 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:30.541 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:31.111 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.111 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.111 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.111 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.371 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.371 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.371 07:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.371 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.371 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.371 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.631 00:17:31.632 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.632 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.632 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.893 { 00:17:31.893 "cntlid": 57, 00:17:31.893 "qid": 0, 00:17:31.893 "state": "enabled", 00:17:31.893 "thread": "nvmf_tgt_poll_group_000", 00:17:31.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:31.893 "listen_address": { 00:17:31.893 "trtype": "TCP", 00:17:31.893 "adrfam": "IPv4", 00:17:31.893 "traddr": "10.0.0.2", 00:17:31.893 "trsvcid": "4420" 00:17:31.893 }, 00:17:31.893 "peer_address": { 00:17:31.893 "trtype": "TCP", 00:17:31.893 "adrfam": "IPv4", 00:17:31.893 "traddr": "10.0.0.1", 00:17:31.893 "trsvcid": "48562" 00:17:31.893 }, 00:17:31.893 "auth": { 00:17:31.893 "state": "completed", 00:17:31.893 "digest": "sha384", 00:17:31.893 "dhgroup": "ffdhe2048" 00:17:31.893 } 00:17:31.893 } 00:17:31.893 ]' 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.893 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.154 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:32.154 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.095 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.355 00:17:33.355 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.355 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.355 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.616 { 00:17:33.616 "cntlid": 59, 00:17:33.616 "qid": 0, 00:17:33.616 "state": "enabled", 00:17:33.616 "thread": "nvmf_tgt_poll_group_000", 00:17:33.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:33.616 "listen_address": { 00:17:33.616 "trtype": "TCP", 00:17:33.616 "adrfam": "IPv4", 00:17:33.616 "traddr": "10.0.0.2", 00:17:33.616 "trsvcid": "4420" 00:17:33.616 }, 00:17:33.616 "peer_address": { 00:17:33.616 "trtype": "TCP", 00:17:33.616 "adrfam": "IPv4", 00:17:33.616 "traddr": "10.0.0.1", 00:17:33.616 "trsvcid": "48588" 00:17:33.616 }, 00:17:33.616 "auth": { 00:17:33.616 "state": "completed", 00:17:33.616 "digest": "sha384", 00:17:33.616 "dhgroup": "ffdhe2048" 00:17:33.616 } 00:17:33.616 } 00:17:33.616 ]' 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.616 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.876 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:33.877 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.817 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.077 00:17:35.077 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.077 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.077 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.342 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.342 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.342 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.342 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.343 { 00:17:35.343 "cntlid": 61, 00:17:35.343 "qid": 0, 00:17:35.343 "state": "enabled", 00:17:35.343 "thread": "nvmf_tgt_poll_group_000", 00:17:35.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:35.343 "listen_address": { 00:17:35.343 "trtype": "TCP", 00:17:35.343 "adrfam": "IPv4", 00:17:35.343 "traddr": "10.0.0.2", 00:17:35.343 "trsvcid": "4420" 00:17:35.343 }, 00:17:35.343 "peer_address": { 00:17:35.343 "trtype": "TCP", 00:17:35.343 "adrfam": "IPv4", 00:17:35.343 "traddr": "10.0.0.1", 00:17:35.343 "trsvcid": "48622" 00:17:35.343 }, 00:17:35.343 "auth": { 00:17:35.343 "state": "completed", 00:17:35.343 "digest": "sha384", 00:17:35.343 "dhgroup": "ffdhe2048" 00:17:35.343 } 00:17:35.343 } 00:17:35.343 ]' 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.343 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.657 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:35.657 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:36.248 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.248 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.248 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.248 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.249 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.249 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.249 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.249 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.510 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.771 00:17:36.771 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.771 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.771 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.031 { 00:17:37.031 "cntlid": 63, 00:17:37.031 "qid": 0, 00:17:37.031 "state": "enabled", 00:17:37.031 "thread": "nvmf_tgt_poll_group_000", 00:17:37.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:37.031 "listen_address": { 00:17:37.031 "trtype": "TCP", 00:17:37.031 "adrfam": "IPv4", 00:17:37.031 "traddr": "10.0.0.2", 00:17:37.031 "trsvcid": "4420" 00:17:37.031 }, 00:17:37.031 "peer_address": { 00:17:37.031 "trtype": "TCP", 00:17:37.031 "adrfam": "IPv4", 00:17:37.031 "traddr": "10.0.0.1", 00:17:37.031 "trsvcid": "48646" 00:17:37.031 }, 00:17:37.031 "auth": { 00:17:37.031 "state": "completed", 00:17:37.031 "digest": "sha384", 00:17:37.031 "dhgroup": "ffdhe2048" 00:17:37.031 } 00:17:37.031 } 00:17:37.031 ]' 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.031 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.291 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:37.292 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:37.861 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:37.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.861 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.861 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.861 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.861 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.122 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.382 
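The run above has settled into its steady pattern: for each (dhgroup, keyid) combination the host options are narrowed to a single digest/dhgroup pair, the host NQN is re-admitted to the subsystem with the key under test, a controller is attached through the authenticated qpair, the negotiated parameters are verified, and everything is torn down again. A minimal sketch of that loop body, reconstructed from the target/auth.sh line numbers echoed in the trace (the hostrpc, rpc_cmd, connect_authenticate, and nvme_connect helpers are the script's own as seen above; the enclosing digest loop, the $hostnqn variable, and the exact helper signatures are assumptions inferred from the trace, not confirmed source):

# reconstruction of target/auth.sh@119-@123 plus the teardown at @80-@83
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # initiator side: accept only the digest/dhgroup pair under test
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # add host to the subsystem, attach nvme0, check qpair auth fields, detach
        connect_authenticate "$digest" "$dhgroup" "$keyid"
        # repeat the handshake with the kernel initiator, then clean up
        nvme_connect --dhchap-secret "${keys[$keyid]}" \
            ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    done
done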
00:17:38.382 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.382 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.382 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.644 { 00:17:38.644 "cntlid": 65, 00:17:38.644 "qid": 0, 00:17:38.644 "state": "enabled", 00:17:38.644 "thread": "nvmf_tgt_poll_group_000", 00:17:38.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:38.644 "listen_address": { 00:17:38.644 "trtype": "TCP", 00:17:38.644 "adrfam": "IPv4", 00:17:38.644 "traddr": "10.0.0.2", 00:17:38.644 "trsvcid": "4420" 00:17:38.644 }, 00:17:38.644 "peer_address": { 00:17:38.644 "trtype": "TCP", 00:17:38.644 "adrfam": "IPv4", 00:17:38.644 "traddr": "10.0.0.1", 00:17:38.644 "trsvcid": "48706" 00:17:38.644 }, 00:17:38.644 "auth": { 00:17:38.644 "state": "completed", 00:17:38.644 "digest": "sha384", 00:17:38.644 "dhgroup": "ffdhe3072" 00:17:38.644 } 00:17:38.644 } 00:17:38.644 ]' 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.644 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.904 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:38.905 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.846 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.106 00:17:40.106 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.106 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.106 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.366 { 00:17:40.366 "cntlid": 67, 00:17:40.366 "qid": 0, 00:17:40.366 "state": "enabled", 00:17:40.366 "thread": "nvmf_tgt_poll_group_000", 00:17:40.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:40.366 "listen_address": { 00:17:40.366 "trtype": "TCP", 00:17:40.366 "adrfam": "IPv4", 00:17:40.366 "traddr": "10.0.0.2", 00:17:40.366 "trsvcid": "4420" 00:17:40.366 }, 00:17:40.366 "peer_address": { 00:17:40.366 "trtype": "TCP", 00:17:40.366 "adrfam": "IPv4", 00:17:40.366 "traddr": "10.0.0.1", 00:17:40.366 "trsvcid": "48736" 00:17:40.366 }, 00:17:40.366 "auth": { 00:17:40.366 "state": "completed", 00:17:40.366 "digest": "sha384", 00:17:40.366 "dhgroup": "ffdhe3072" 00:17:40.366 } 00:17:40.366 } 00:17:40.366 ]' 00:17:40.366 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.366 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.626 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret 
DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:40.626 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:41.566 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.567 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.827 00:17:41.827 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.827 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.827 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.088 { 00:17:42.088 "cntlid": 69, 00:17:42.088 "qid": 0, 00:17:42.088 "state": "enabled", 00:17:42.088 "thread": "nvmf_tgt_poll_group_000", 00:17:42.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.088 "listen_address": { 00:17:42.088 "trtype": "TCP", 00:17:42.088 "adrfam": "IPv4", 00:17:42.088 "traddr": "10.0.0.2", 00:17:42.088 "trsvcid": "4420" 00:17:42.088 }, 00:17:42.088 "peer_address": { 00:17:42.088 "trtype": "TCP", 00:17:42.088 "adrfam": "IPv4", 00:17:42.088 "traddr": "10.0.0.1", 00:17:42.088 "trsvcid": "48768" 00:17:42.088 }, 00:17:42.088 "auth": { 00:17:42.088 "state": "completed", 00:17:42.088 "digest": "sha384", 00:17:42.088 "dhgroup": "ffdhe3072" 00:17:42.088 } 00:17:42.088 } 00:17:42.088 ]' 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.088 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:42.347 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:42.347 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:42.918 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.918 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.918 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.918 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.918 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
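Note how key3 differs from keys 0-2 in the add_host call just above: the ckey expansion echoed at target/auth.sh@68 yields nothing when ckeys[keyid] is empty, so the host is admitted with --dhchap-key key3 alone (unidirectional authentication), while the other keys also carry --dhchap-ctrlr-key for bidirectional authentication. A standalone illustration of that ${var:+word} idiom (the key value here is a placeholder, not one from this run):

ckeys=([0]="DHHC-1:03:placeholder0" [3]="")   # index 3 deliberately empty, as in the trace
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"    # 0 -- empty value, so :+ expands to nothing and no extra RPC args are passed
keyid=0
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey0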
00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.179 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.440 00:17:43.440 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.440 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.440 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.700 { 00:17:43.700 "cntlid": 71, 00:17:43.700 "qid": 0, 00:17:43.700 "state": "enabled", 00:17:43.700 "thread": "nvmf_tgt_poll_group_000", 00:17:43.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:43.700 "listen_address": { 00:17:43.700 "trtype": "TCP", 00:17:43.700 "adrfam": "IPv4", 00:17:43.700 "traddr": "10.0.0.2", 00:17:43.700 "trsvcid": "4420" 00:17:43.700 }, 00:17:43.700 "peer_address": { 00:17:43.700 "trtype": "TCP", 00:17:43.700 "adrfam": "IPv4", 00:17:43.700 "traddr": "10.0.0.1", 00:17:43.700 "trsvcid": "48798" 00:17:43.700 }, 00:17:43.700 "auth": { 00:17:43.700 "state": "completed", 00:17:43.700 "digest": "sha384", 00:17:43.700 "dhgroup": "ffdhe3072" 00:17:43.700 } 00:17:43.700 } 00:17:43.700 ]' 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.700 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.961 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:43.961 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:44.533 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
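Every connect_authenticate pass closes with the same three assertions seen throughout this log: the qpair listing is read back from the target and the negotiated digest, dhgroup, and auth state are compared against what was configured. A condensed sketch of those checks (target/auth.sh@74-@77; rpc_cmd is the script's target-side RPC wrapper, the here-string plumbing is an assumption, and the expected values shown are the ones for this ffdhe4096 iteration):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# the qpair must have completed DH-HMAC-CHAP with exactly the negotiated parameters
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]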
00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.793 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.054 00:17:45.054 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.054 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.054 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.315 { 00:17:45.315 "cntlid": 73, 00:17:45.315 "qid": 0, 00:17:45.315 "state": "enabled", 00:17:45.315 "thread": "nvmf_tgt_poll_group_000", 00:17:45.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:45.315 "listen_address": { 00:17:45.315 "trtype": "TCP", 00:17:45.315 "adrfam": "IPv4", 00:17:45.315 "traddr": "10.0.0.2", 00:17:45.315 "trsvcid": "4420" 00:17:45.315 }, 00:17:45.315 "peer_address": { 00:17:45.315 "trtype": "TCP", 00:17:45.315 "adrfam": "IPv4", 00:17:45.315 "traddr": "10.0.0.1", 00:17:45.315 "trsvcid": "48824" 00:17:45.315 }, 00:17:45.315 "auth": { 00:17:45.315 "state": "completed", 00:17:45.315 "digest": "sha384", 00:17:45.315 "dhgroup": "ffdhe4096" 00:17:45.315 } 00:17:45.315 } 00:17:45.315 ]' 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.315 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.315 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.315 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.315 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.315 
07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.315 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.575 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:45.575 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:46.517 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.517 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:46.517 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.517 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.517 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.778 00:17:46.778 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.778 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.778 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.041 { 00:17:47.041 "cntlid": 75, 00:17:47.041 "qid": 0, 00:17:47.041 "state": "enabled", 00:17:47.041 "thread": "nvmf_tgt_poll_group_000", 00:17:47.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:47.041 "listen_address": { 00:17:47.041 "trtype": "TCP", 00:17:47.041 "adrfam": "IPv4", 00:17:47.041 "traddr": "10.0.0.2", 00:17:47.041 "trsvcid": "4420" 00:17:47.041 }, 00:17:47.041 "peer_address": { 00:17:47.041 "trtype": "TCP", 00:17:47.041 "adrfam": "IPv4", 00:17:47.041 "traddr": "10.0.0.1", 00:17:47.041 "trsvcid": "48862" 00:17:47.041 }, 00:17:47.041 "auth": { 00:17:47.041 "state": "completed", 00:17:47.041 "digest": "sha384", 00:17:47.041 "dhgroup": "ffdhe4096" 00:17:47.041 } 00:17:47.041 } 00:17:47.041 ]' 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.041 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.302 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:47.302 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.244 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.504 00:17:48.504 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.504 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.504 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.766 { 00:17:48.766 "cntlid": 77, 00:17:48.766 "qid": 0, 00:17:48.766 "state": "enabled", 00:17:48.766 "thread": "nvmf_tgt_poll_group_000", 00:17:48.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:48.766 "listen_address": { 00:17:48.766 "trtype": "TCP", 00:17:48.766 "adrfam": "IPv4", 00:17:48.766 "traddr": "10.0.0.2", 00:17:48.766 "trsvcid": "4420" 00:17:48.766 }, 00:17:48.766 "peer_address": { 00:17:48.766 "trtype": "TCP", 00:17:48.766 "adrfam": "IPv4", 00:17:48.766 "traddr": "10.0.0.1", 00:17:48.766 "trsvcid": "60054" 00:17:48.766 }, 00:17:48.766 "auth": { 00:17:48.766 "state": "completed", 00:17:48.766 "digest": "sha384", 00:17:48.766 "dhgroup": "ffdhe4096" 00:17:48.766 } 00:17:48.766 } 00:17:48.766 ]' 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.766 07:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.766 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.026 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:49.026 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.967 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.228 00:17:50.228 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.228 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.228 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.488 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.488 { 00:17:50.488 "cntlid": 79, 00:17:50.488 "qid": 0, 00:17:50.488 "state": "enabled", 00:17:50.488 "thread": "nvmf_tgt_poll_group_000", 00:17:50.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.488 "listen_address": { 00:17:50.488 "trtype": "TCP", 00:17:50.488 "adrfam": "IPv4", 00:17:50.488 "traddr": "10.0.0.2", 00:17:50.488 "trsvcid": "4420" 00:17:50.488 }, 00:17:50.488 "peer_address": { 00:17:50.488 "trtype": "TCP", 00:17:50.488 "adrfam": "IPv4", 00:17:50.488 "traddr": "10.0.0.1", 00:17:50.488 "trsvcid": "60076" 00:17:50.488 }, 00:17:50.488 "auth": { 00:17:50.488 "state": "completed", 00:17:50.488 "digest": "sha384", 00:17:50.489 "dhgroup": "ffdhe4096" 00:17:50.489 } 00:17:50.489 } 00:17:50.489 ]' 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.489 07:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.489 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.749 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:50.749 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.691 07:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.691 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.262 00:17:52.262 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.262 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.262 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.262 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.263 { 00:17:52.263 "cntlid": 81, 00:17:52.263 "qid": 0, 00:17:52.263 "state": "enabled", 00:17:52.263 "thread": "nvmf_tgt_poll_group_000", 00:17:52.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.263 "listen_address": { 00:17:52.263 "trtype": "TCP", 00:17:52.263 "adrfam": "IPv4", 00:17:52.263 "traddr": "10.0.0.2", 00:17:52.263 "trsvcid": "4420" 00:17:52.263 }, 00:17:52.263 "peer_address": { 00:17:52.263 "trtype": "TCP", 00:17:52.263 "adrfam": "IPv4", 00:17:52.263 "traddr": "10.0.0.1", 00:17:52.263 "trsvcid": "60116" 00:17:52.263 }, 00:17:52.263 "auth": { 00:17:52.263 "state": "completed", 00:17:52.263 "digest": 
"sha384", 00:17:52.263 "dhgroup": "ffdhe6144" 00:17:52.263 } 00:17:52.263 } 00:17:52.263 ]' 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.263 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:52.523 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:53.465 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.465 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.037 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.037 { 00:17:54.037 "cntlid": 83, 00:17:54.037 "qid": 0, 00:17:54.037 "state": "enabled", 00:17:54.037 "thread": "nvmf_tgt_poll_group_000", 00:17:54.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.037 "listen_address": { 00:17:54.037 "trtype": "TCP", 00:17:54.037 "adrfam": "IPv4", 00:17:54.037 "traddr": "10.0.0.2", 00:17:54.037 
"trsvcid": "4420" 00:17:54.037 }, 00:17:54.037 "peer_address": { 00:17:54.037 "trtype": "TCP", 00:17:54.037 "adrfam": "IPv4", 00:17:54.037 "traddr": "10.0.0.1", 00:17:54.037 "trsvcid": "60134" 00:17:54.037 }, 00:17:54.037 "auth": { 00:17:54.037 "state": "completed", 00:17:54.037 "digest": "sha384", 00:17:54.037 "dhgroup": "ffdhe6144" 00:17:54.037 } 00:17:54.037 } 00:17:54.037 ]' 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.037 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.297 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.297 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.297 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.297 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.297 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.297 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:54.297 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:55.252 
07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.252 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.825 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.826 { 00:17:55.826 "cntlid": 85, 00:17:55.826 "qid": 0, 00:17:55.826 "state": "enabled", 00:17:55.826 "thread": "nvmf_tgt_poll_group_000", 00:17:55.826 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.826 "listen_address": { 00:17:55.826 "trtype": "TCP", 00:17:55.826 "adrfam": "IPv4", 00:17:55.826 "traddr": "10.0.0.2", 00:17:55.826 "trsvcid": "4420" 00:17:55.826 }, 00:17:55.826 "peer_address": { 00:17:55.826 "trtype": "TCP", 00:17:55.826 "adrfam": "IPv4", 00:17:55.826 "traddr": "10.0.0.1", 00:17:55.826 "trsvcid": "60176" 00:17:55.826 }, 00:17:55.826 "auth": { 00:17:55.826 "state": "completed", 00:17:55.826 "digest": "sha384", 00:17:55.826 "dhgroup": "ffdhe6144" 00:17:55.826 } 00:17:55.826 } 00:17:55.826 ]' 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.826 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.087 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.087 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.087 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.087 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.087 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.347 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:56.347 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.918 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.918 07:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.180 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.441 00:17:57.441 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.441 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.441 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.702 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.703 { 00:17:57.703 "cntlid": 87, 
00:17:57.703 "qid": 0, 00:17:57.703 "state": "enabled", 00:17:57.703 "thread": "nvmf_tgt_poll_group_000", 00:17:57.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.703 "listen_address": { 00:17:57.703 "trtype": "TCP", 00:17:57.703 "adrfam": "IPv4", 00:17:57.703 "traddr": "10.0.0.2", 00:17:57.703 "trsvcid": "4420" 00:17:57.703 }, 00:17:57.703 "peer_address": { 00:17:57.703 "trtype": "TCP", 00:17:57.703 "adrfam": "IPv4", 00:17:57.703 "traddr": "10.0.0.1", 00:17:57.703 "trsvcid": "56458" 00:17:57.703 }, 00:17:57.703 "auth": { 00:17:57.703 "state": "completed", 00:17:57.703 "digest": "sha384", 00:17:57.703 "dhgroup": "ffdhe6144" 00:17:57.703 } 00:17:57.703 } 00:17:57.703 ]' 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.703 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.963 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:57.963 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.906 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.478 00:17:59.478 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.478 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.478 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.739 { 00:17:59.739 "cntlid": 89, 00:17:59.739 "qid": 0, 00:17:59.739 "state": "enabled", 00:17:59.739 "thread": "nvmf_tgt_poll_group_000", 00:17:59.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.739 "listen_address": { 00:17:59.739 "trtype": "TCP", 00:17:59.739 "adrfam": "IPv4", 00:17:59.739 "traddr": "10.0.0.2", 00:17:59.739 "trsvcid": "4420" 00:17:59.739 }, 00:17:59.739 "peer_address": { 00:17:59.739 "trtype": "TCP", 00:17:59.739 "adrfam": "IPv4", 00:17:59.739 "traddr": "10.0.0.1", 00:17:59.739 "trsvcid": "56484" 00:17:59.739 }, 00:17:59.739 "auth": { 00:17:59.739 "state": "completed", 00:17:59.739 "digest": "sha384", 00:17:59.739 "dhgroup": "ffdhe8192" 00:17:59.739 } 00:17:59.739 } 00:17:59.739 ]' 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.739 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.000 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:00.000 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.938 07:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.938 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.507 00:18:01.507 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.507 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.507 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.766 { 00:18:01.766 "cntlid": 91, 00:18:01.766 "qid": 0, 00:18:01.766 "state": "enabled", 00:18:01.766 "thread": "nvmf_tgt_poll_group_000", 00:18:01.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.766 "listen_address": { 00:18:01.766 "trtype": "TCP", 00:18:01.766 "adrfam": "IPv4", 00:18:01.766 "traddr": "10.0.0.2", 00:18:01.766 "trsvcid": "4420" 00:18:01.766 }, 00:18:01.766 "peer_address": { 00:18:01.766 "trtype": "TCP", 00:18:01.766 "adrfam": "IPv4", 00:18:01.766 "traddr": "10.0.0.1", 00:18:01.766 "trsvcid": "56506" 00:18:01.766 }, 00:18:01.766 "auth": { 00:18:01.766 "state": "completed", 00:18:01.766 "digest": "sha384", 00:18:01.766 "dhgroup": "ffdhe8192" 00:18:01.766 } 00:18:01.766 } 00:18:01.766 ]' 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.766 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:02.025 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:02.593 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.593 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.593 07:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.593 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.854 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.425 00:18:03.425 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.425 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.425 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.686 07:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.686 { 00:18:03.686 "cntlid": 93, 00:18:03.686 "qid": 0, 00:18:03.686 "state": "enabled", 00:18:03.686 "thread": "nvmf_tgt_poll_group_000", 00:18:03.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.686 "listen_address": { 00:18:03.686 "trtype": "TCP", 00:18:03.686 "adrfam": "IPv4", 00:18:03.686 "traddr": "10.0.0.2", 00:18:03.686 "trsvcid": "4420" 00:18:03.686 }, 00:18:03.686 "peer_address": { 00:18:03.686 "trtype": "TCP", 00:18:03.686 "adrfam": "IPv4", 00:18:03.686 "traddr": "10.0.0.1", 00:18:03.686 "trsvcid": "56538" 00:18:03.686 }, 00:18:03.686 "auth": { 00:18:03.686 "state": "completed", 00:18:03.686 "digest": "sha384", 00:18:03.686 "dhgroup": "ffdhe8192" 00:18:03.686 } 00:18:03.686 } 00:18:03.686 ]' 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.686 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.946 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:03.946 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.887 07:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.887 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.458 00:18:05.458 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.458 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.459 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.719 { 00:18:05.719 "cntlid": 95, 00:18:05.719 "qid": 0, 00:18:05.719 "state": "enabled", 00:18:05.719 "thread": "nvmf_tgt_poll_group_000", 00:18:05.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.719 "listen_address": { 00:18:05.719 "trtype": "TCP", 00:18:05.719 "adrfam": "IPv4", 00:18:05.719 "traddr": "10.0.0.2", 00:18:05.719 "trsvcid": "4420" 00:18:05.719 }, 00:18:05.719 "peer_address": { 00:18:05.719 "trtype": "TCP", 00:18:05.719 "adrfam": "IPv4", 00:18:05.719 "traddr": "10.0.0.1", 00:18:05.719 "trsvcid": "56566" 00:18:05.719 }, 00:18:05.719 "auth": { 00:18:05.719 "state": "completed", 00:18:05.719 "digest": "sha384", 00:18:05.719 "dhgroup": "ffdhe8192" 00:18:05.719 } 00:18:05.719 } 00:18:05.719 ]' 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.719 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.980 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:05.980 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:06.553 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.814 07:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.814 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.076 00:18:07.076 
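At this point the trace has moved on to the sha512 digest and restarts the same dhgroup/key sweep. A condensed sketch of the loop the log is walking through, reconstructed from the target/auth.sh line references visible in the trace (@118-@123); hostrpc and rpc_cmd are the wrappers the script itself uses (rpc.py -s /var/tmp/host.sock for the host side, the default socket for the target), and the digest/dhgroup lists are assumptions inferred from what this run exercises:

# Reconstructed outer loop (a sketch, not the verbatim script).
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
for digest in "${digests[@]}"; do           # sha384 and sha512 both appear in this run
  for dhgroup in "${dhgroups[@]}"; do       # null and ffdhe2048..ffdhe8192 appear
    for keyid in "${!keys[@]}"; do          # key0..key3, with optional ckey0..ckey2
      # Pin the host to exactly one digest/dhgroup so the negotiated values
      # can be asserted later via nvmf_subsystem_get_qpairs.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # @123 in the trace
    done
  done
done

Restricting the host to a single digest/dhgroup per iteration is what makes the later equality checks meaningful: whatever the qpair reports must be the one value the host was allowed to offer.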
07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.076 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.076 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.337 { 00:18:07.337 "cntlid": 97, 00:18:07.337 "qid": 0, 00:18:07.337 "state": "enabled", 00:18:07.337 "thread": "nvmf_tgt_poll_group_000", 00:18:07.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.337 "listen_address": { 00:18:07.337 "trtype": "TCP", 00:18:07.337 "adrfam": "IPv4", 00:18:07.337 "traddr": "10.0.0.2", 00:18:07.337 "trsvcid": "4420" 00:18:07.337 }, 00:18:07.337 "peer_address": { 00:18:07.337 "trtype": "TCP", 00:18:07.337 "adrfam": "IPv4", 00:18:07.337 "traddr": "10.0.0.1", 00:18:07.337 "trsvcid": "41506" 00:18:07.337 }, 00:18:07.337 "auth": { 00:18:07.337 "state": "completed", 00:18:07.337 "digest": "sha512", 00:18:07.337 "dhgroup": "null" 00:18:07.337 } 00:18:07.337 } 00:18:07.337 ]' 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.337 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.337 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:07.337 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.337 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.337 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.337 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.599 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:07.599 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.541 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.801 00:18:08.801 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.802 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.802 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.062 { 00:18:09.062 "cntlid": 99, 00:18:09.062 "qid": 0, 00:18:09.062 "state": "enabled", 00:18:09.062 "thread": "nvmf_tgt_poll_group_000", 00:18:09.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.062 "listen_address": { 00:18:09.062 "trtype": "TCP", 00:18:09.062 "adrfam": "IPv4", 00:18:09.062 "traddr": "10.0.0.2", 00:18:09.062 "trsvcid": "4420" 00:18:09.062 }, 00:18:09.062 "peer_address": { 00:18:09.062 "trtype": "TCP", 00:18:09.062 "adrfam": "IPv4", 00:18:09.062 "traddr": "10.0.0.1", 00:18:09.062 "trsvcid": "41526" 00:18:09.062 }, 00:18:09.062 "auth": { 00:18:09.062 "state": "completed", 00:18:09.062 "digest": "sha512", 00:18:09.062 "dhgroup": "null" 00:18:09.062 } 00:18:09.062 } 00:18:09.062 ]' 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.062 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.063 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.063 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.324 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:09.324 07:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
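Each successful attach is followed by the same assertion block (@73-@78 in the trace). A minimal sketch of that check, using the exact jq filters that appear in the log; the [[ ... ]] comparisons stand in for the script's pattern matches:

# Verify the controller came up and the qpair negotiated what we pinned.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]  # e.g. sha512
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. null
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]  # DH-HMAC-CHAP finished
hostrpc bdev_nvme_detach_controller nvme0   # tear down before the kernel-nvme leg

The "state": "completed" field is the real pass/fail signal; the digest and dhgroup comparisons just confirm that the bdev_nvme_set_options restriction actually took effect on the negotiated qpair.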
00:18:10.268 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.528 00:18:10.528 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.528 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.529 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.790 { 00:18:10.790 "cntlid": 101, 00:18:10.790 "qid": 0, 00:18:10.790 "state": "enabled", 00:18:10.790 "thread": "nvmf_tgt_poll_group_000", 00:18:10.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:10.790 "listen_address": { 00:18:10.790 "trtype": "TCP", 00:18:10.790 "adrfam": "IPv4", 00:18:10.790 "traddr": "10.0.0.2", 00:18:10.790 "trsvcid": "4420" 00:18:10.790 }, 00:18:10.790 "peer_address": { 00:18:10.790 "trtype": "TCP", 00:18:10.790 "adrfam": "IPv4", 00:18:10.790 "traddr": "10.0.0.1", 00:18:10.790 "trsvcid": "41544" 00:18:10.790 }, 00:18:10.790 "auth": { 00:18:10.790 "state": "completed", 00:18:10.790 "digest": "sha512", 00:18:10.790 "dhgroup": "null" 00:18:10.790 } 00:18:10.790 } 00:18:10.790 ]' 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.790 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.050 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:11.051 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.993 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.253 00:18:12.253 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.253 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.253 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.515 { 00:18:12.515 "cntlid": 103, 00:18:12.515 "qid": 0, 00:18:12.515 "state": "enabled", 00:18:12.515 "thread": "nvmf_tgt_poll_group_000", 00:18:12.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:12.515 "listen_address": { 00:18:12.515 "trtype": "TCP", 00:18:12.515 "adrfam": "IPv4", 00:18:12.515 "traddr": "10.0.0.2", 00:18:12.515 "trsvcid": "4420" 00:18:12.515 }, 00:18:12.515 "peer_address": { 00:18:12.515 "trtype": "TCP", 00:18:12.515 "adrfam": "IPv4", 00:18:12.515 "traddr": "10.0.0.1", 00:18:12.515 "trsvcid": "41572" 00:18:12.515 }, 00:18:12.515 "auth": { 00:18:12.515 "state": "completed", 00:18:12.515 "digest": "sha512", 00:18:12.515 "dhgroup": "null" 00:18:12.515 } 00:18:12.515 } 00:18:12.515 ]' 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.515 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.775 07:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:12.775 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:13.346 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
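Interleaved with the SPDK-initiator checks, every iteration also exercises the kernel initiator (@80-@83 in the trace). A sketch of that leg, with the flags copied from the nvme connect/disconnect entries above; $key and $ckey stand for the DHHC-1:<id>:<base64>... secret blobs printed in the trace:

# Kernel nvme-cli leg: connect with DH-HMAC-CHAP secrets, then clean up.
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
     --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:${hostid}"  # drop the host entry for the next keyid

The "NQN:... disconnected 1 controller(s)" lines throughout the log are the success output of this disconnect step; removing the host afterwards ensures the next keyid starts from a clean subsystem allow-list.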
00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.608 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.868 00:18:13.868 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.868 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.868 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.129 { 00:18:14.129 "cntlid": 105, 00:18:14.129 "qid": 0, 00:18:14.129 "state": "enabled", 00:18:14.129 "thread": "nvmf_tgt_poll_group_000", 00:18:14.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.129 "listen_address": { 00:18:14.129 "trtype": "TCP", 00:18:14.129 "adrfam": "IPv4", 00:18:14.129 "traddr": "10.0.0.2", 00:18:14.129 "trsvcid": "4420" 00:18:14.129 }, 00:18:14.129 "peer_address": { 00:18:14.129 "trtype": "TCP", 00:18:14.129 "adrfam": "IPv4", 00:18:14.129 "traddr": "10.0.0.1", 00:18:14.129 "trsvcid": "41592" 00:18:14.129 }, 00:18:14.129 "auth": { 00:18:14.129 "state": "completed", 00:18:14.129 "digest": "sha512", 00:18:14.129 "dhgroup": "ffdhe2048" 00:18:14.129 } 00:18:14.129 } 00:18:14.129 ]' 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.129 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.129 07:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.390 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:14.390 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.363 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.364 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.364 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.690 00:18:15.690 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.690 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.690 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.983 { 00:18:15.983 "cntlid": 107, 00:18:15.983 "qid": 0, 00:18:15.983 "state": "enabled", 00:18:15.983 "thread": "nvmf_tgt_poll_group_000", 00:18:15.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:15.983 "listen_address": { 00:18:15.983 "trtype": "TCP", 00:18:15.983 "adrfam": "IPv4", 00:18:15.983 "traddr": "10.0.0.2", 00:18:15.983 "trsvcid": "4420" 00:18:15.983 }, 00:18:15.983 "peer_address": { 00:18:15.983 "trtype": "TCP", 00:18:15.983 "adrfam": "IPv4", 00:18:15.983 "traddr": "10.0.0.1", 00:18:15.983 "trsvcid": "41612" 00:18:15.983 }, 00:18:15.983 "auth": { 00:18:15.983 "state": "completed", 00:18:15.983 "digest": "sha512", 00:18:15.983 "dhgroup": "ffdhe2048" 00:18:15.983 } 00:18:15.983 } 00:18:15.983 ]' 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.983 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.245 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:16.245 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.818 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.079 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.080 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.080 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.080 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.080 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.340 00:18:17.340 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.340 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.340 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.600 { 00:18:17.600 "cntlid": 109, 00:18:17.600 "qid": 0, 00:18:17.600 "state": "enabled", 00:18:17.600 "thread": "nvmf_tgt_poll_group_000", 00:18:17.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.600 "listen_address": { 00:18:17.600 "trtype": "TCP", 00:18:17.600 "adrfam": "IPv4", 00:18:17.600 "traddr": "10.0.0.2", 00:18:17.600 "trsvcid": "4420" 00:18:17.600 }, 00:18:17.600 "peer_address": { 00:18:17.600 "trtype": "TCP", 00:18:17.600 "adrfam": "IPv4", 00:18:17.600 "traddr": "10.0.0.1", 00:18:17.600 "trsvcid": "46566" 00:18:17.600 }, 00:18:17.600 "auth": { 00:18:17.600 "state": "completed", 00:18:17.600 "digest": "sha512", 00:18:17.600 "dhgroup": "ffdhe2048" 00:18:17.600 } 00:18:17.600 } 00:18:17.600 ]' 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.600 07:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.600 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.860 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:17.860 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.802 07:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.802 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.803 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.803 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.803 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.803 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.063 00:18:19.063 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.063 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.063 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.323 { 00:18:19.323 "cntlid": 111, 00:18:19.323 "qid": 0, 00:18:19.323 "state": "enabled", 00:18:19.323 "thread": "nvmf_tgt_poll_group_000", 00:18:19.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.323 "listen_address": { 00:18:19.323 "trtype": "TCP", 00:18:19.323 "adrfam": "IPv4", 00:18:19.323 "traddr": "10.0.0.2", 00:18:19.323 "trsvcid": "4420" 00:18:19.323 }, 00:18:19.323 "peer_address": { 00:18:19.323 "trtype": "TCP", 00:18:19.323 "adrfam": "IPv4", 00:18:19.323 "traddr": "10.0.0.1", 00:18:19.323 "trsvcid": "46594" 00:18:19.323 }, 00:18:19.323 "auth": { 00:18:19.323 "state": "completed", 00:18:19.323 "digest": "sha512", 00:18:19.323 "dhgroup": "ffdhe2048" 00:18:19.323 } 00:18:19.323 } 00:18:19.323 ]' 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.323 
07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.323 07:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.323 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.323 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.323 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.584 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:19.584 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.527 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.528 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:20.528 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.528 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.789 00:18:20.789 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.789 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.789 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.049 { 00:18:21.049 "cntlid": 113, 00:18:21.049 "qid": 0, 00:18:21.049 "state": "enabled", 00:18:21.049 "thread": "nvmf_tgt_poll_group_000", 00:18:21.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.049 "listen_address": { 00:18:21.049 "trtype": "TCP", 00:18:21.049 "adrfam": "IPv4", 00:18:21.049 "traddr": "10.0.0.2", 00:18:21.049 "trsvcid": "4420" 00:18:21.049 }, 00:18:21.049 "peer_address": { 00:18:21.049 "trtype": "TCP", 00:18:21.049 "adrfam": "IPv4", 00:18:21.049 "traddr": "10.0.0.1", 00:18:21.049 "trsvcid": "46610" 00:18:21.049 }, 00:18:21.049 "auth": { 00:18:21.049 "state": "completed", 00:18:21.049 "digest": "sha512", 00:18:21.049 "dhgroup": "ffdhe3072" 00:18:21.049 } 00:18:21.049 } 00:18:21.049 ]' 00:18:21.049 07:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.049 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.311 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:21.311 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.256 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.518 00:18:22.518 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.518 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.518 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.780 { 00:18:22.780 "cntlid": 115, 00:18:22.780 "qid": 0, 00:18:22.780 "state": "enabled", 00:18:22.780 "thread": "nvmf_tgt_poll_group_000", 00:18:22.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:22.780 "listen_address": { 00:18:22.780 "trtype": "TCP", 00:18:22.780 "adrfam": "IPv4", 00:18:22.780 "traddr": "10.0.0.2", 00:18:22.780 "trsvcid": "4420" 00:18:22.780 }, 00:18:22.780 "peer_address": { 00:18:22.780 "trtype": "TCP", 00:18:22.780 "adrfam": "IPv4", 
00:18:22.780 "traddr": "10.0.0.1", 00:18:22.780 "trsvcid": "46630" 00:18:22.780 }, 00:18:22.780 "auth": { 00:18:22.780 "state": "completed", 00:18:22.780 "digest": "sha512", 00:18:22.780 "dhgroup": "ffdhe3072" 00:18:22.780 } 00:18:22.780 } 00:18:22.780 ]' 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.780 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.041 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:23.041 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.984 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.985 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.246 00:18:24.246 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.246 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.246 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.506 { 00:18:24.506 "cntlid": 117, 00:18:24.506 "qid": 0, 00:18:24.506 "state": "enabled", 00:18:24.506 "thread": "nvmf_tgt_poll_group_000", 00:18:24.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:24.506 "listen_address": { 00:18:24.506 "trtype": "TCP", 
00:18:24.506 "adrfam": "IPv4", 00:18:24.506 "traddr": "10.0.0.2", 00:18:24.506 "trsvcid": "4420" 00:18:24.506 }, 00:18:24.506 "peer_address": { 00:18:24.506 "trtype": "TCP", 00:18:24.506 "adrfam": "IPv4", 00:18:24.506 "traddr": "10.0.0.1", 00:18:24.506 "trsvcid": "46658" 00:18:24.506 }, 00:18:24.506 "auth": { 00:18:24.506 "state": "completed", 00:18:24.506 "digest": "sha512", 00:18:24.506 "dhgroup": "ffdhe3072" 00:18:24.506 } 00:18:24.506 } 00:18:24.506 ]' 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.506 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.768 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:24.768 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.712 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.972 00:18:25.972 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.972 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.972 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.232 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.232 { 00:18:26.232 "cntlid": 119, 00:18:26.232 "qid": 0, 00:18:26.232 "state": "enabled", 00:18:26.232 "thread": "nvmf_tgt_poll_group_000", 00:18:26.232 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.232 "listen_address": { 00:18:26.232 "trtype": "TCP", 00:18:26.233 "adrfam": "IPv4", 00:18:26.233 "traddr": "10.0.0.2", 00:18:26.233 "trsvcid": "4420" 00:18:26.233 }, 00:18:26.233 "peer_address": { 00:18:26.233 "trtype": "TCP", 00:18:26.233 "adrfam": "IPv4", 00:18:26.233 "traddr": "10.0.0.1", 00:18:26.233 "trsvcid": "46686" 00:18:26.233 }, 00:18:26.233 "auth": { 00:18:26.233 "state": "completed", 00:18:26.233 "digest": "sha512", 00:18:26.233 "dhgroup": "ffdhe3072" 00:18:26.233 } 00:18:26.233 } 00:18:26.233 ]' 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.233 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.494 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:26.494 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.438 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:27.438 07:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.438 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.699 00:18:27.699 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.699 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.699 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.960 07:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.960 { 00:18:27.960 "cntlid": 121, 00:18:27.960 "qid": 0, 00:18:27.960 "state": "enabled", 00:18:27.960 "thread": "nvmf_tgt_poll_group_000", 00:18:27.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.960 "listen_address": { 00:18:27.960 "trtype": "TCP", 00:18:27.960 "adrfam": "IPv4", 00:18:27.960 "traddr": "10.0.0.2", 00:18:27.960 "trsvcid": "4420" 00:18:27.960 }, 00:18:27.960 "peer_address": { 00:18:27.960 "trtype": "TCP", 00:18:27.960 "adrfam": "IPv4", 00:18:27.960 "traddr": "10.0.0.1", 00:18:27.960 "trsvcid": "55210" 00:18:27.960 }, 00:18:27.960 "auth": { 00:18:27.960 "state": "completed", 00:18:27.960 "digest": "sha512", 00:18:27.960 "dhgroup": "ffdhe4096" 00:18:27.960 } 00:18:27.960 } 00:18:27.960 ]' 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.960 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:28.221 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.164 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.424 00:18:29.424 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.424 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.425 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.685 { 00:18:29.685 "cntlid": 123, 00:18:29.685 "qid": 0, 00:18:29.685 "state": "enabled", 00:18:29.685 "thread": "nvmf_tgt_poll_group_000", 00:18:29.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.685 "listen_address": { 00:18:29.685 "trtype": "TCP", 00:18:29.685 "adrfam": "IPv4", 00:18:29.685 "traddr": "10.0.0.2", 00:18:29.685 "trsvcid": "4420" 00:18:29.685 }, 00:18:29.685 "peer_address": { 00:18:29.685 "trtype": "TCP", 00:18:29.685 "adrfam": "IPv4", 00:18:29.685 "traddr": "10.0.0.1", 00:18:29.685 "trsvcid": "55226" 00:18:29.685 }, 00:18:29.685 "auth": { 00:18:29.685 "state": "completed", 00:18:29.685 "digest": "sha512", 00:18:29.685 "dhgroup": "ffdhe4096" 00:18:29.685 } 00:18:29.685 } 00:18:29.685 ]' 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.685 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.686 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.686 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.686 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.946 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:29.946 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.888 07:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.888 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.148 00:18:31.148 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.148 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.148 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.408 07:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.408 { 00:18:31.408 "cntlid": 125, 00:18:31.408 "qid": 0, 00:18:31.408 "state": "enabled", 00:18:31.408 "thread": "nvmf_tgt_poll_group_000", 00:18:31.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.408 "listen_address": { 00:18:31.408 "trtype": "TCP", 00:18:31.408 "adrfam": "IPv4", 00:18:31.408 "traddr": "10.0.0.2", 00:18:31.408 "trsvcid": "4420" 00:18:31.408 }, 00:18:31.408 "peer_address": { 00:18:31.408 "trtype": "TCP", 00:18:31.408 "adrfam": "IPv4", 00:18:31.408 "traddr": "10.0.0.1", 00:18:31.408 "trsvcid": "55252" 00:18:31.408 }, 00:18:31.408 "auth": { 00:18:31.408 "state": "completed", 00:18:31.408 "digest": "sha512", 00:18:31.408 "dhgroup": "ffdhe4096" 00:18:31.408 } 00:18:31.408 } 00:18:31.408 ]' 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.408 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.668 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.668 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.668 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.668 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:31.668 07:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.610 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.871 00:18:32.871 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.871 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.871 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.132 07:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.132 { 00:18:33.132 "cntlid": 127, 00:18:33.132 "qid": 0, 00:18:33.132 "state": "enabled", 00:18:33.132 "thread": "nvmf_tgt_poll_group_000", 00:18:33.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.132 "listen_address": { 00:18:33.132 "trtype": "TCP", 00:18:33.132 "adrfam": "IPv4", 00:18:33.132 "traddr": "10.0.0.2", 00:18:33.132 "trsvcid": "4420" 00:18:33.132 }, 00:18:33.132 "peer_address": { 00:18:33.132 "trtype": "TCP", 00:18:33.132 "adrfam": "IPv4", 00:18:33.132 "traddr": "10.0.0.1", 00:18:33.132 "trsvcid": "55290" 00:18:33.132 }, 00:18:33.132 "auth": { 00:18:33.132 "state": "completed", 00:18:33.132 "digest": "sha512", 00:18:33.132 "dhgroup": "ffdhe4096" 00:18:33.132 } 00:18:33.132 } 00:18:33.132 ]' 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.132 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.393 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.393 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.393 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.393 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:33.393 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.335 07:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.335 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.907 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.907 
07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.907 { 00:18:34.907 "cntlid": 129, 00:18:34.907 "qid": 0, 00:18:34.907 "state": "enabled", 00:18:34.907 "thread": "nvmf_tgt_poll_group_000", 00:18:34.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:34.907 "listen_address": { 00:18:34.907 "trtype": "TCP", 00:18:34.907 "adrfam": "IPv4", 00:18:34.907 "traddr": "10.0.0.2", 00:18:34.907 "trsvcid": "4420" 00:18:34.907 }, 00:18:34.907 "peer_address": { 00:18:34.907 "trtype": "TCP", 00:18:34.907 "adrfam": "IPv4", 00:18:34.907 "traddr": "10.0.0.1", 00:18:34.907 "trsvcid": "55302" 00:18:34.907 }, 00:18:34.907 "auth": { 00:18:34.907 "state": "completed", 00:18:34.907 "digest": "sha512", 00:18:34.907 "dhgroup": "ffdhe6144" 00:18:34.907 } 00:18:34.907 } 00:18:34.907 ]' 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.907 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.168 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.168 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.168 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.168 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:35.168 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret 
DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.112 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.683 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.683 { 00:18:36.683 "cntlid": 131, 00:18:36.683 "qid": 0, 00:18:36.683 "state": "enabled", 00:18:36.683 "thread": "nvmf_tgt_poll_group_000", 00:18:36.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.683 "listen_address": { 00:18:36.683 "trtype": "TCP", 00:18:36.683 "adrfam": "IPv4", 00:18:36.683 "traddr": "10.0.0.2", 00:18:36.683 "trsvcid": "4420" 00:18:36.683 }, 00:18:36.683 "peer_address": { 00:18:36.683 "trtype": "TCP", 00:18:36.683 "adrfam": "IPv4", 00:18:36.683 "traddr": "10.0.0.1", 00:18:36.683 "trsvcid": "55320" 00:18:36.683 }, 00:18:36.683 "auth": { 00:18:36.683 "state": "completed", 00:18:36.683 "digest": "sha512", 00:18:36.683 "dhgroup": "ffdhe6144" 00:18:36.683 } 00:18:36.683 } 00:18:36.683 ]' 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.683 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.944 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.944 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.944 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.944 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.944 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.205 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:37.205 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.777 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.039 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.298 00:18:38.298 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.298 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.298 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.558 { 00:18:38.558 "cntlid": 133, 00:18:38.558 "qid": 0, 00:18:38.558 "state": "enabled", 00:18:38.558 "thread": "nvmf_tgt_poll_group_000", 00:18:38.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:38.558 "listen_address": { 00:18:38.558 "trtype": "TCP", 00:18:38.558 "adrfam": "IPv4", 00:18:38.558 "traddr": "10.0.0.2", 00:18:38.558 "trsvcid": "4420" 00:18:38.558 }, 00:18:38.558 "peer_address": { 00:18:38.558 "trtype": "TCP", 00:18:38.558 "adrfam": "IPv4", 00:18:38.558 "traddr": "10.0.0.1", 00:18:38.558 "trsvcid": "40388" 00:18:38.558 }, 00:18:38.558 "auth": { 00:18:38.558 "state": "completed", 00:18:38.558 "digest": "sha512", 00:18:38.558 "dhgroup": "ffdhe6144" 00:18:38.558 } 00:18:38.558 } 00:18:38.558 ]' 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.558 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.820 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.820 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.820 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.820 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret 
DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:38.820 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:39.761 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.333 00:18:40.333 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.333 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.333 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.333 { 00:18:40.333 "cntlid": 135, 00:18:40.333 "qid": 0, 00:18:40.333 "state": "enabled", 00:18:40.333 "thread": "nvmf_tgt_poll_group_000", 00:18:40.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:40.333 "listen_address": { 00:18:40.333 "trtype": "TCP", 00:18:40.333 "adrfam": "IPv4", 00:18:40.333 "traddr": "10.0.0.2", 00:18:40.333 "trsvcid": "4420" 00:18:40.333 }, 00:18:40.333 "peer_address": { 00:18:40.333 "trtype": "TCP", 00:18:40.333 "adrfam": "IPv4", 00:18:40.333 "traddr": "10.0.0.1", 00:18:40.333 "trsvcid": "40410" 00:18:40.333 }, 00:18:40.333 "auth": { 00:18:40.333 "state": "completed", 00:18:40.333 "digest": "sha512", 00:18:40.333 "dhgroup": "ffdhe6144" 00:18:40.333 } 00:18:40.333 } 00:18:40.333 ]' 00:18:40.333 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.593 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.852 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:40.853 07:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.682 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.683 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.683 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.683 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.683 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.253 00:18:42.253 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.253 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.253 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.514 { 00:18:42.514 "cntlid": 137, 00:18:42.514 "qid": 0, 00:18:42.514 "state": "enabled", 00:18:42.514 "thread": "nvmf_tgt_poll_group_000", 00:18:42.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:42.514 "listen_address": { 00:18:42.514 "trtype": "TCP", 00:18:42.514 "adrfam": "IPv4", 00:18:42.514 "traddr": "10.0.0.2", 00:18:42.514 "trsvcid": "4420" 00:18:42.514 }, 00:18:42.514 "peer_address": { 00:18:42.514 "trtype": "TCP", 00:18:42.514 "adrfam": "IPv4", 00:18:42.514 "traddr": "10.0.0.1", 00:18:42.514 "trsvcid": "40450" 00:18:42.514 }, 00:18:42.514 "auth": { 00:18:42.514 "state": "completed", 00:18:42.514 "digest": "sha512", 00:18:42.514 "dhgroup": "ffdhe8192" 00:18:42.514 } 00:18:42.514 } 00:18:42.514 ]' 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.514 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.775 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:42.775 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.719 07:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.719 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.720 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.291 00:18:44.291 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.291 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.291 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.553 { 00:18:44.553 "cntlid": 139, 00:18:44.553 "qid": 0, 00:18:44.553 "state": "enabled", 00:18:44.553 "thread": "nvmf_tgt_poll_group_000", 00:18:44.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:44.553 "listen_address": { 00:18:44.553 "trtype": "TCP", 00:18:44.553 "adrfam": "IPv4", 00:18:44.553 "traddr": "10.0.0.2", 00:18:44.553 "trsvcid": "4420" 00:18:44.553 }, 00:18:44.553 "peer_address": { 00:18:44.553 "trtype": "TCP", 00:18:44.553 "adrfam": "IPv4", 00:18:44.553 "traddr": "10.0.0.1", 00:18:44.553 "trsvcid": "40474" 00:18:44.553 }, 00:18:44.553 "auth": { 00:18:44.553 "state": "completed", 00:18:44.553 "digest": "sha512", 00:18:44.553 "dhgroup": "ffdhe8192" 00:18:44.553 } 00:18:44.553 } 00:18:44.553 ]' 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.553 07:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.553 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.815 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:44.815 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: --dhchap-ctrl-secret DHHC-1:02:YTM5NWY4ODNlYzM0YzA1ZGM5MDllNWYxYzYyMzQ3NzgxMDA4ZmU3NjY4YTJjZDY2wkZFFg==: 00:18:45.387 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.649 07:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.649 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.221 00:18:46.221 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.221 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.221 07:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.482 { 00:18:46.482 "cntlid": 141, 00:18:46.482 "qid": 0, 00:18:46.482 "state": "enabled", 00:18:46.482 "thread": "nvmf_tgt_poll_group_000", 00:18:46.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:46.482 "listen_address": { 00:18:46.482 "trtype": "TCP", 00:18:46.482 "adrfam": "IPv4", 00:18:46.482 "traddr": "10.0.0.2", 00:18:46.482 "trsvcid": "4420" 00:18:46.482 }, 00:18:46.482 "peer_address": { 00:18:46.482 "trtype": "TCP", 00:18:46.482 "adrfam": "IPv4", 00:18:46.482 "traddr": "10.0.0.1", 00:18:46.482 "trsvcid": "40496" 00:18:46.482 }, 00:18:46.482 "auth": { 00:18:46.482 "state": "completed", 00:18:46.482 "digest": "sha512", 00:18:46.482 "dhgroup": "ffdhe8192" 00:18:46.482 } 00:18:46.482 } 00:18:46.482 ]' 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.482 07:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.482 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.743 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:46.743 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:01:MjNlNmU2M2U1ZDM2MmNjZGJjOTQzZWQwOTc5YTQ2MjDtkiBf: 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.686 07:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.686 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.258 00:18:48.258 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.258 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.258 07:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.520 { 00:18:48.520 "cntlid": 143, 00:18:48.520 "qid": 0, 00:18:48.520 "state": "enabled", 00:18:48.520 "thread": "nvmf_tgt_poll_group_000", 00:18:48.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:48.520 "listen_address": { 00:18:48.520 "trtype": "TCP", 00:18:48.520 "adrfam": "IPv4", 00:18:48.520 "traddr": "10.0.0.2", 00:18:48.520 "trsvcid": "4420" 00:18:48.520 }, 00:18:48.520 "peer_address": { 00:18:48.520 "trtype": "TCP", 00:18:48.520 "adrfam": "IPv4", 00:18:48.520 "traddr": "10.0.0.1", 00:18:48.520 "trsvcid": "60490" 00:18:48.520 }, 00:18:48.520 "auth": { 00:18:48.520 "state": "completed", 00:18:48.520 "digest": "sha512", 00:18:48.520 "dhgroup": "ffdhe8192" 00:18:48.520 } 00:18:48.520 } 00:18:48.520 ]' 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.520 
07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.520 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.781 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:48.781 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.724 07:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.724 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.296 00:18:50.296 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.296 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.296 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.556 { 00:18:50.556 "cntlid": 145, 00:18:50.556 "qid": 0, 00:18:50.556 "state": "enabled", 00:18:50.556 "thread": "nvmf_tgt_poll_group_000", 00:18:50.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.556 "listen_address": { 00:18:50.556 "trtype": "TCP", 00:18:50.556 "adrfam": "IPv4", 00:18:50.556 "traddr": "10.0.0.2", 00:18:50.556 "trsvcid": "4420" 00:18:50.556 }, 00:18:50.556 "peer_address": { 00:18:50.556 
"trtype": "TCP", 00:18:50.556 "adrfam": "IPv4", 00:18:50.556 "traddr": "10.0.0.1", 00:18:50.556 "trsvcid": "60526" 00:18:50.556 }, 00:18:50.556 "auth": { 00:18:50.556 "state": "completed", 00:18:50.556 "digest": "sha512", 00:18:50.556 "dhgroup": "ffdhe8192" 00:18:50.556 } 00:18:50.556 } 00:18:50.556 ]' 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.556 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.816 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:50.816 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YmM2OTlhYWFhNjFhNjljOWFlNDdhZjE3MjY5MjA3ZWY2ZmRmOWI2ODM5YTE0NmI3vFITqw==: --dhchap-ctrl-secret DHHC-1:03:MWFjMDRmOWM1NmYxNWM3MDgzYTE0ZGUyYWUzNGJhOTE1YWIyMTI0ODgwMTQwZDk0ODc3MWIwYzNkNmE2NWFjMQTrLJ4=: 00:18:51.387 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.387 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.387 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.387 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:51.647 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:51.648 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:51.909 request: 00:18:51.909 { 00:18:51.909 "name": "nvme0", 00:18:51.909 "trtype": "tcp", 00:18:51.909 "traddr": "10.0.0.2", 00:18:51.909 "adrfam": "ipv4", 00:18:51.909 "trsvcid": "4420", 00:18:51.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:51.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:51.909 "prchk_reftag": false, 00:18:51.909 "prchk_guard": false, 00:18:51.909 "hdgst": false, 00:18:51.909 "ddgst": false, 00:18:51.909 "dhchap_key": "key2", 00:18:51.909 "allow_unrecognized_csi": false, 00:18:51.909 "method": "bdev_nvme_attach_controller", 00:18:51.909 "req_id": 1 00:18:51.909 } 00:18:51.909 Got JSON-RPC error response 00:18:51.909 response: 00:18:51.909 { 00:18:51.909 "code": -5, 00:18:51.909 "message": "Input/output error" 00:18:51.909 } 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.909 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.169 07:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.170 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.430 request: 00:18:52.430 { 00:18:52.430 "name": "nvme0", 00:18:52.430 "trtype": "tcp", 00:18:52.430 "traddr": "10.0.0.2", 00:18:52.430 "adrfam": "ipv4", 00:18:52.430 "trsvcid": "4420", 00:18:52.430 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.430 "prchk_reftag": false, 00:18:52.430 "prchk_guard": false, 00:18:52.430 "hdgst": false, 00:18:52.430 "ddgst": false, 00:18:52.430 "dhchap_key": "key1", 00:18:52.430 "dhchap_ctrlr_key": "ckey2", 00:18:52.430 "allow_unrecognized_csi": false, 00:18:52.430 "method": "bdev_nvme_attach_controller", 00:18:52.430 "req_id": 1 00:18:52.430 } 00:18:52.430 Got JSON-RPC error response 00:18:52.430 response: 00:18:52.430 { 00:18:52.430 "code": -5, 00:18:52.430 "message": "Input/output error" 00:18:52.430 } 00:18:52.430 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.691 07:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.951 request: 00:18:52.951 { 00:18:52.951 "name": "nvme0", 00:18:52.951 "trtype": "tcp", 00:18:52.951 "traddr": "10.0.0.2", 00:18:52.951 "adrfam": "ipv4", 00:18:52.951 "trsvcid": "4420", 00:18:52.951 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.951 "prchk_reftag": false, 00:18:52.951 "prchk_guard": false, 00:18:52.951 "hdgst": false, 00:18:52.951 "ddgst": false, 00:18:52.951 "dhchap_key": "key1", 00:18:52.951 "dhchap_ctrlr_key": "ckey1", 00:18:52.951 "allow_unrecognized_csi": false, 00:18:52.951 "method": "bdev_nvme_attach_controller", 00:18:52.951 "req_id": 1 00:18:52.951 } 00:18:52.951 Got JSON-RPC error response 00:18:52.951 response: 00:18:52.951 { 00:18:52.951 "code": -5, 00:18:52.951 "message": "Input/output error" 00:18:52.951 } 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1253763 00:18:52.951 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1253763 ']' 00:18:52.952 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1253763 00:18:52.952 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:52.952 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.952 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1253763 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1253763' 00:18:53.212 killing process with pid 1253763 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1253763 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1253763 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1281444 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1281444 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1281444 ']' 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.212 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1281444 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1281444 ']' 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.475 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.735 null0 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nbJ 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.DYK ]] 00:18:53.735 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DYK 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.56C 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.SmA ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SmA 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.736 07:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lyW 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.SPN ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SPN 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vPW 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:18:53.736 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.675 nvme0n1 00:18:54.675 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.675 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.675 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.957 { 00:18:54.957 "cntlid": 1, 00:18:54.957 "qid": 0, 00:18:54.957 "state": "enabled", 00:18:54.957 "thread": "nvmf_tgt_poll_group_000", 00:18:54.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:54.957 "listen_address": { 00:18:54.957 "trtype": "TCP", 00:18:54.957 "adrfam": "IPv4", 00:18:54.957 "traddr": "10.0.0.2", 00:18:54.957 "trsvcid": "4420" 00:18:54.957 }, 00:18:54.957 "peer_address": { 00:18:54.957 "trtype": "TCP", 00:18:54.957 "adrfam": "IPv4", 00:18:54.957 "traddr": "10.0.0.1", 00:18:54.957 "trsvcid": "60568" 00:18:54.957 }, 00:18:54.957 "auth": { 00:18:54.957 "state": "completed", 00:18:54.957 "digest": "sha512", 00:18:54.957 "dhgroup": "ffdhe8192" 00:18:54.957 } 00:18:54.957 } 00:18:54.957 ]' 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.957 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.314 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:55.314 07:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=: 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:55.923 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:56.184 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:56.446 request:
00:18:56.446 {
00:18:56.446 "name": "nvme0",
00:18:56.446 "trtype": "tcp",
00:18:56.446 "traddr": "10.0.0.2",
00:18:56.446 "adrfam": "ipv4",
00:18:56.446 "trsvcid": "4420",
00:18:56.446 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:56.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:56.446 "prchk_reftag": false,
00:18:56.446 "prchk_guard": false,
00:18:56.446 "hdgst": false,
00:18:56.446 "ddgst": false,
00:18:56.446 "dhchap_key": "key3",
00:18:56.446 "allow_unrecognized_csi": false,
00:18:56.446 "method": "bdev_nvme_attach_controller",
00:18:56.446 "req_id": 1
00:18:56.446 }
00:18:56.446 Got JSON-RPC error response
00:18:56.446 response:
00:18:56.446 {
00:18:56.446 "code": -5,
00:18:56.446 "message": "Input/output error"
00:18:56.446 }
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:56.446 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:56.446 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:56.707 request:
00:18:56.707 {
00:18:56.707 "name": "nvme0",
00:18:56.707 "trtype": "tcp",
00:18:56.707 "traddr": "10.0.0.2",
00:18:56.707 "adrfam": "ipv4",
00:18:56.707 "trsvcid": "4420",
00:18:56.707 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:56.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:56.707 "prchk_reftag": false,
00:18:56.707 "prchk_guard": false,
00:18:56.707 "hdgst": false,
00:18:56.707 "ddgst": false,
00:18:56.707 "dhchap_key": "key3",
00:18:56.707 "allow_unrecognized_csi": false,
00:18:56.707 "method": "bdev_nvme_attach_controller",
00:18:56.707 "req_id": 1
00:18:56.707 }
00:18:56.707 Got JSON-RPC error response
00:18:56.707 response:
00:18:56.707 {
00:18:56.707 "code": -5,
00:18:56.707 "message": "Input/output error"
00:18:56.707 }
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:56.707 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:56.968 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:56.969 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:57.230 request:
00:18:57.230 {
00:18:57.230 "name": "nvme0",
00:18:57.230 "trtype": "tcp",
00:18:57.230 "traddr": "10.0.0.2",
00:18:57.230 "adrfam": "ipv4",
00:18:57.230 "trsvcid": "4420",
00:18:57.230 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:57.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:57.230 "prchk_reftag": false,
00:18:57.230 "prchk_guard": false,
00:18:57.230 "hdgst": false,
00:18:57.230 "ddgst": false,
00:18:57.230 "dhchap_key": "key0",
00:18:57.230 "dhchap_ctrlr_key": "key1",
00:18:57.230 "allow_unrecognized_csi": false,
00:18:57.230 "method": "bdev_nvme_attach_controller",
00:18:57.230 "req_id": 1
00:18:57.230 }
00:18:57.230 Got JSON-RPC error response
00:18:57.230 response:
00:18:57.230 {
00:18:57.230 "code": -5,
00:18:57.230 "message": "Input/output error"
00:18:57.230 }
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:57.230 07:19:31
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:57.230 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:57.491 nvme0n1
00:18:57.491 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:18:57.491 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:18:57.491 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:57.751 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:58.694 nvme0n1
00:18:58.694 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:58.694 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_get_controllers
00:18:58.694 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=:
00:18:58.955 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: --dhchap-ctrl-secret DHHC-1:03:MjQ4Zjc3NWNjODBkNmZkZTE5ZmZjNDE1YjgzMTJmYzhlMmJiZjAyMDJlMDM1NTY3NGY5MWY5OThhNzYzZTU0Ms5YuOY=:
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT
bdev_connect -b nvme0 --dhchap-key key1
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:59.897 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:19:00.468 request:
00:19:00.468 {
00:19:00.468 "name": "nvme0",
00:19:00.468 "trtype": "tcp",
00:19:00.468 "traddr": "10.0.0.2",
00:19:00.468 "adrfam": "ipv4",
00:19:00.468 "trsvcid": "4420",
00:19:00.468 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:00.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:19:00.468 "prchk_reftag": false,
00:19:00.468 "prchk_guard": false,
00:19:00.468 "hdgst": false,
00:19:00.468 "ddgst": false,
00:19:00.468 "dhchap_key": "key1",
00:19:00.468 "allow_unrecognized_csi": false,
00:19:00.468 "method": "bdev_nvme_attach_controller",
00:19:00.468 "req_id": 1
00:19:00.468 }
00:19:00.468 Got JSON-RPC error response
00:19:00.468 response:
00:19:00.468 {
00:19:00.468 "code": -5,
00:19:00.468 "message": "Input/output error"
00:19:00.468 }
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:00.468 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:01.408 nvme0n1
00:19:01.408 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:19:01.408 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:19:01.408 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:01.408 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:01.408 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:01.408 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:19:01.668 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:19:01.928 nvme0n1
00:19:01.928 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:19:01.928 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:01.929 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: '' 2s
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C:
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C: ]]
00:19:02.189 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmJkZGJlZjhlYzY3NTIxNWI5MTNhMzBkMTY1NzVlYWSX1V6C:
00:19:02.450 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:19:02.450 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:19:02.450 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- #
nvme_set_keys nvme0 '' DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: 2s
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==:
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==: ]]
00:19:04.364 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDY1NzNiOGI2MjI2ZjdhMmQwYTEwZmI2ZWVmM2RkYjZhZTdlODIxODNkZTYwOTY5n3pC0A==:
00:19:04.364 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:19:04.364 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:19:06.277 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:06.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s
4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:06.538 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:07.479 nvme0n1
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:07.479 07:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:07.739 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:19:07.739 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.739 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:19:08.001 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_get_controllers
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:08.261 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:08.831 request:
00:19:08.832 {
00:19:08.832 "name": "nvme0",
00:19:08.832 "dhchap_key": "key1",
00:19:08.832 "dhchap_ctrlr_key": "key3",
00:19:08.832 "method": "bdev_nvme_set_keys",
00:19:08.832 "req_id": 1
00:19:08.832 }
00:19:08.832 Got JSON-RPC error response
00:19:08.832 response:
00:19:08.832 {
00:19:08.832 "code": -13,
00:19:08.832 "message": "Permission denied"
00:19:08.832 }
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:19:08.832 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:09.092 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 ))
00:19:09.092 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:19:10.034 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:19:10.034 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:19:10.034 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:10.294 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:11.234 nvme0n1
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
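The rotation being exercised in this stretch of the trace has two halves: `nvmf_subsystem_set_keys` re-keys the target side for one host, then the host's `bdev_nvme_set_keys` re-authenticates the live controller, and a key pair the target was never given must be refused. A minimal sketch of that sequence, copying the rpc.py path, socket, NQNs, and key names from this run (the target-side calls go through `rpc_cmd`, whose socket is not shown in the trace, so the default target socket is an assumption here); the trace of the negative check continues right after this sketch:

#!/usr/bin/env bash
# Sketch of the DHCHAP re-key flow traced above. The keys (key0..key3)
# are assumed to have been loaded into the keyring earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Target side: permit key2 (host) / key3 (controller) for this host.
# rpc_cmd in the test wraps the same call, presumably on the target's
# default RPC socket.
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: re-authenticate the existing controller with the new pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A pair the target was not configured with must fail; the trace below
# shows bdev_nvme_set_keys returning -13 ("Permission denied") for it.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key0 \
    && echo "unexpected: rotation to an unconfigured key succeeded"

Ordering matters in this flow: the target is re-keyed first so the host's subsequent re-authentication can land on a pair the target already accepts.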
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:11.234 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:11.804 request:
00:19:11.804 {
00:19:11.804 "name": "nvme0",
00:19:11.804 "dhchap_key": "key2",
00:19:11.804 "dhchap_ctrlr_key": "key0",
00:19:11.804 "method": "bdev_nvme_set_keys",
00:19:11.804 "req_id": 1
00:19:11.804 }
00:19:11.804 Got JSON-RPC error response
00:19:11.804 response:
00:19:11.804 {
00:19:11.804 "code": -13,
00:19:11.804 "message": "Permission denied"
00:19:11.804 }
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:11.804 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:12.064 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:19:12.064 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:19:13.012 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:13.012 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:13.012 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:13.272 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:19:13.272 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:19:14.273 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:14.273 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:14.273 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:19:14.533 07:19:49
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1254091
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1254091 ']'
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1254091
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1254091
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1254091'
00:19:14.533 killing process with pid 1254091
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1254091
00:19:14.533 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1254091
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:14.793 rmmod nvme_tcp
00:19:14.793 rmmod nvme_fabrics
00:19:14.793 rmmod nvme_keyring
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1281444 ']'
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1281444
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1281444 ']'
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1281444
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1281444
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1281444'
00:19:14.793 killing process with pid 1281444
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1281444
00:19:14.793 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1281444
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:15.053 07:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:16.967 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:16.967 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nbJ /tmp/spdk.key-sha256.56C /tmp/spdk.key-sha384.lyW /tmp/spdk.key-sha512.vPW /tmp/spdk.key-sha512.DYK /tmp/spdk.key-sha384.SmA /tmp/spdk.key-sha256.SPN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:19:16.967 00:19:16
00:19:16.967 real 2m46.638s
00:19:16.967 user 6m8.943s
00:19:16.967 sys 0m24.962s
00:19:16.967 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:16.967 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.967 ************************************
00:19:16.967 END TEST nvmf_auth_target
00:19:16.967 ************************************
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:17.229 ************************************
00:19:17.229 START TEST nvmf_bdevio_no_huge
00:19:17.229 ************************************
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:17.229 * Looking for test storage...
00:19:17.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:17.229 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:19:17.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:17.229 --rc genhtml_branch_coverage=1
00:19:17.229 --rc genhtml_function_coverage=1
00:19:17.229 --rc genhtml_legend=1
00:19:17.229 --rc geninfo_all_blocks=1
00:19:17.230 --rc geninfo_unexecuted_blocks=1
00:19:17.230
00:19:17.230 '
00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:19:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:17.230 --rc genhtml_branch_coverage=1
00:19:17.230 --rc genhtml_function_coverage=1
00:19:17.230 --rc genhtml_legend=1
00:19:17.230 --rc geninfo_all_blocks=1
00:19:17.230 --rc geninfo_unexecuted_blocks=1
00:19:17.230
00:19:17.230 '
00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:19:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:17.230 --rc genhtml_branch_coverage=1
00:19:17.230 --rc genhtml_function_coverage=1
00:19:17.230 --rc genhtml_legend=1
00:19:17.230 --rc geninfo_all_blocks=1
00:19:17.230 --rc geninfo_unexecuted_blocks=1
00:19:17.230
00:19:17.230 '
00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:19:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:17.230 --rc genhtml_branch_coverage=1
00:19:17.230 --rc genhtml_function_coverage=1
00:19:17.230 --rc genhtml_legend=1
00:19:17.230 --rc geninfo_all_blocks=1
00:19:17.230 --rc geninfo_unexecuted_blocks=1
00:19:17.230
00:19:17.230 '
00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.230 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.491 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.491 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
[toolchain PATH dump elided: paths/export.sh@2-@6 prepend and re-echo the same /opt/{golangci,protoc,go} prefixes many times over; identical to the dump repeated later in this log] 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:19:17.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.492 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:17.492 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.636 
07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:25.636 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.636 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:25.637 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:25.637 Found net devices under 0000:31:00.0: cvl_0_0 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:25.637 Found net devices under 0000:31:00.1: cvl_0_1 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.637 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:25.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:19:25.898 00:19:25.898 --- 10.0.0.2 ping statistics --- 00:19:25.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.898 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:19:25.898 00:19:25.898 --- 10.0.0.1 ping statistics --- 00:19:25.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.898 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1290562 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1290562 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1290562 ']' 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.898 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.159 [2024-11-20 07:20:00.702456] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:19:26.159 [2024-11-20 07:20:00.702512] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:26.159 [2024-11-20 07:20:00.813620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.159 [2024-11-20 07:20:00.872647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.159 [2024-11-20 07:20:00.872692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.159 [2024-11-20 07:20:00.872701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.159 [2024-11-20 07:20:00.872708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.159 [2024-11-20 07:20:00.872715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
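The two app_setup_trace notices above give everything needed to inspect this run's tracepoints. A minimal sketch, assuming the spdk_trace binary sits under build/bin in the checked-out tree; only the '-s nvmf -i 0' arguments and the /dev/shm path come from the log itself:

# Sketch: decode the 0xFFFF trace groups enabled above. The binary location
# is an assumption; '-s nvmf' and '-i 0' are exactly what the notice prints.
./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# The notice's alternative: keep the raw shm file for offline analysis.
cp /dev/shm/nvmf_trace.0 /tmp/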
00:19:26.159 [2024-11-20 07:20:00.874591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.159 [2024-11-20 07:20:00.874750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:26.159 [2024-11-20 07:20:00.874926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:26.159 [2024-11-20 07:20:00.874944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 [2024-11-20 07:20:01.574314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 Malloc0 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.101 [2024-11-20 07:20:01.628165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:27.101 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:27.101 { 00:19:27.101 "params": { 00:19:27.101 "name": "Nvme$subsystem", 00:19:27.101 "trtype": "$TEST_TRANSPORT", 00:19:27.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.101 "adrfam": "ipv4", 00:19:27.101 "trsvcid": "$NVMF_PORT", 00:19:27.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.101 "hdgst": ${hdgst:-false}, 00:19:27.102 "ddgst": ${ddgst:-false} 00:19:27.102 }, 00:19:27.102 "method": "bdev_nvme_attach_controller" 00:19:27.102 } 00:19:27.102 EOF 00:19:27.102 )") 00:19:27.102 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:27.102 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:27.102 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:27.102 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:27.102 "params": { 00:19:27.102 "name": "Nvme1", 00:19:27.102 "trtype": "tcp", 00:19:27.102 "traddr": "10.0.0.2", 00:19:27.102 "adrfam": "ipv4", 00:19:27.102 "trsvcid": "4420", 00:19:27.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.102 "hdgst": false, 00:19:27.102 "ddgst": false 00:19:27.102 }, 00:19:27.102 "method": "bdev_nvme_attach_controller" 00:19:27.102 }' 00:19:27.102 [2024-11-20 07:20:01.687440] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:19:27.102 [2024-11-20 07:20:01.687509] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1290748 ] 00:19:27.102 [2024-11-20 07:20:01.775836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.102 [2024-11-20 07:20:01.831515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.102 [2024-11-20 07:20:01.831634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.102 [2024-11-20 07:20:01.831638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.363 I/O targets: 00:19:27.363 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:27.363 00:19:27.363 00:19:27.363 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.363 http://cunit.sourceforge.net/ 00:19:27.363 00:19:27.363 00:19:27.363 Suite: bdevio tests on: Nvme1n1 00:19:27.363 Test: blockdev write read block ...passed 00:19:27.363 Test: blockdev write zeroes read block ...passed 00:19:27.363 Test: blockdev write zeroes read no split ...passed 00:19:27.363 Test: blockdev write zeroes read split ...passed 00:19:27.624 Test: blockdev write zeroes read split partial ...passed 00:19:27.624 Test: blockdev reset ...[2024-11-20 07:20:02.179224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:27.624 [2024-11-20 07:20:02.179284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111efb0 (9): Bad file descriptor 00:19:27.624 [2024-11-20 07:20:02.237423] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
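The heredoc rendered above is the bdev attach stanza bdevio consumed on --json /dev/fd/62 before the suite whose reset test just passed. A hedged sketch of issuing the same attachment by hand over JSON-RPC instead; the rpc.py flag names are assumptions, while every value is copied from the printed config:

# Sketch: equivalent controller attach via scripts/rpc.py. Flag names
# (-b/-t/-a/-s/-f/-n/-q) are assumptions; values match the rendered JSON.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1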
00:19:27.624 passed 00:19:27.624 Test: blockdev write read 8 blocks ...passed 00:19:27.624 Test: blockdev write read size > 128k ...passed 00:19:27.624 Test: blockdev write read invalid size ...passed 00:19:27.624 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:27.624 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:27.624 Test: blockdev write read max offset ...passed 00:19:27.885 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:27.885 Test: blockdev writev readv 8 blocks ...passed 00:19:27.885 Test: blockdev writev readv 30 x 1block ...passed 00:19:27.885 Test: blockdev writev readv block ...passed 00:19:27.885 Test: blockdev writev readv size > 128k ...passed 00:19:27.885 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:27.885 Test: blockdev comparev and writev ...[2024-11-20 07:20:02.504423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.885 [2024-11-20 07:20:02.504449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.885 [2024-11-20 07:20:02.504460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.885 [2024-11-20 07:20:02.504466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:27.885 [2024-11-20 07:20:02.504956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.885 [2024-11-20 07:20:02.504965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:27.885 [2024-11-20 07:20:02.504978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.885 [2024-11-20 07:20:02.504984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.505447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.886 [2024-11-20 07:20:02.505455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.505464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.886 [2024-11-20 07:20:02.505470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.505959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.886 [2024-11-20 07:20:02.505967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.505976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:27.886 [2024-11-20 07:20:02.505982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:27.886 passed 00:19:27.886 Test: blockdev nvme passthru rw ...passed 00:19:27.886 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:20:02.590828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.886 [2024-11-20 07:20:02.590838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.591228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.886 [2024-11-20 07:20:02.591236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.591554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.886 [2024-11-20 07:20:02.591561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:27.886 [2024-11-20 07:20:02.591963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:27.886 [2024-11-20 07:20:02.591971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:27.886 passed 00:19:27.886 Test: blockdev nvme admin passthru ...passed 00:19:28.147 Test: blockdev copy ...passed 00:19:28.147 00:19:28.147 Run Summary: Type Total Ran Passed Failed Inactive 00:19:28.147 suites 1 1 n/a 0 0 00:19:28.147 tests 23 23 23 0 0 00:19:28.147 asserts 152 152 152 0 n/a 00:19:28.147 00:19:28.147 Elapsed time = 1.321 seconds 00:19:28.147 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.147 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.147 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.407 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.407 rmmod nvme_tcp 00:19:28.408 rmmod nvme_fabrics 00:19:28.408 rmmod nvme_keyring 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1290562 ']' 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1290562 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1290562 ']' 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1290562 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.408 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1290562 00:19:28.408 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:28.408 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:28.408 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1290562' 00:19:28.408 killing process with pid 1290562 00:19:28.408 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1290562 00:19:28.408 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1290562 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.668 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.669 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.582 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.842 00:19:30.842 real 0m13.573s 00:19:30.842 user 0m14.006s 00:19:30.842 sys 0m7.523s 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.842 ************************************ 00:19:30.842 END TEST nvmf_bdevio_no_huge 00:19:30.842 ************************************ 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.842 ************************************ 00:19:30.842 START TEST nvmf_tls 00:19:30.842 ************************************ 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:30.842 * Looking for test storage... 00:19:30.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:30.842 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.103 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.104 --rc genhtml_branch_coverage=1 00:19:31.104 --rc genhtml_function_coverage=1 00:19:31.104 --rc genhtml_legend=1 00:19:31.104 --rc geninfo_all_blocks=1 00:19:31.104 --rc geninfo_unexecuted_blocks=1 00:19:31.104 00:19:31.104 ' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.104 --rc genhtml_branch_coverage=1 00:19:31.104 --rc genhtml_function_coverage=1 00:19:31.104 --rc genhtml_legend=1 00:19:31.104 --rc geninfo_all_blocks=1 00:19:31.104 --rc geninfo_unexecuted_blocks=1 00:19:31.104 00:19:31.104 ' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.104 --rc genhtml_branch_coverage=1 00:19:31.104 --rc genhtml_function_coverage=1 00:19:31.104 --rc genhtml_legend=1 00:19:31.104 --rc geninfo_all_blocks=1 00:19:31.104 --rc geninfo_unexecuted_blocks=1 00:19:31.104 00:19:31.104 ' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.104 --rc genhtml_branch_coverage=1 00:19:31.104 --rc genhtml_function_coverage=1 00:19:31.104 --rc genhtml_legend=1 00:19:31.104 --rc geninfo_all_blocks=1 00:19:31.104 --rc geninfo_unexecuted_blocks=1 00:19:31.104 00:19:31.104 ' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
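As the tls suite re-sources nvmf/common.sh below, it derives the same initiator identity the bdevio suite used earlier. A minimal sketch of that derivation; the parameter expansion is an assumption, and gen-hostnqn mints a fresh uuid on each call:

# Sketch: host identity derivation shared by both suites. The expansion that
# strips the NQN down to the host ID is an assumption, not taken from the log.
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-... in this run
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # e.g. 00539ede-7deb-ec11-9bc7-a4bf01928396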
00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
[toolchain PATH re-exports elided: paths/export.sh@3-@6 prepend and echo the same /opt/{golangci,protoc,go} prefixes as export.sh@2 above] 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.104 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.105 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.105 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.105 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:31.105 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.105 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:39.247 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:39.248 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:39.248 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:39.248 Found net devices under 0000:31:00.0: cvl_0_0 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:39.248 Found net devices under 0000:31:00.1: cvl_0_1 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.248 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:19:39.510 00:19:39.510 --- 10.0.0.2 ping statistics --- 00:19:39.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.510 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:39.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:19:39.510 00:19:39.510 --- 10.0.0.1 ping statistics --- 00:19:39.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.510 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1295776 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1295776 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1295776 ']' 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:39.510 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.773 [2024-11-20 07:20:14.303131] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
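[Annotation] The nvmf_tcp_init sequence above builds the point-to-point topology used by the TLS tests: the first E810 port (cvl_0_0) moves into a private network namespace for the target, the second port (cvl_0_1) stays in the root namespace for the initiator, and reachability is verified with one ping in each direction. A condensed sketch of the same plumbing, with device names and addresses taken from the trace (the initial ip -4 addr flush steps are omitted):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator address in the root namespace, target address inside the ns.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability checks in both directions, as in the trace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

In the trace the rule is inserted through the ipts wrapper, which appends an SPDK_NVMF comment to the iptables rule so it can be cleaned up after the run.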
00:19:39.773 [2024-11-20 07:20:14.303196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.773 [2024-11-20 07:20:14.379941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.773 [2024-11-20 07:20:14.417199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.773 [2024-11-20 07:20:14.417239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.773 [2024-11-20 07:20:14.417247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.773 [2024-11-20 07:20:14.417253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.773 [2024-11-20 07:20:14.417258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.773 [2024-11-20 07:20:14.417950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.773 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:39.773 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:39.773 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.773 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.773 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.034 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.034 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:40.034 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:40.034 true 00:19:40.034 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.034 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:40.295 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:40.295 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:40.295 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:40.556 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.556 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:40.556 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:40.556 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:40.556 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:40.817 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.817 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:41.078 07:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:41.340 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:41.340 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:41.602 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:41.602 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:41.602 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:41.863 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WEHR1tfiTz 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KOGKGNQnkf 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WEHR1tfiTz 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KOGKGNQnkf 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:42.124 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:42.385 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WEHR1tfiTz 00:19:42.385 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WEHR1tfiTz 00:19:42.385 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:42.646 [2024-11-20 07:20:17.282431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.646 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.907 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.907 [2024-11-20 07:20:17.635290] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.907 [2024-11-20 07:20:17.635597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.907 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:43.168 malloc0 00:19:43.168 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:43.429 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WEHR1tfiTz 00:19:43.429 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.690 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WEHR1tfiTz 00:19:53.779 Initializing NVMe Controllers 00:19:53.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:53.779 Initialization complete. Launching workers. 00:19:53.779 ======================================================== 00:19:53.779 Latency(us) 00:19:53.779 Device Information : IOPS MiB/s Average min max 00:19:53.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18579.40 72.58 3444.71 1194.46 4095.20 00:19:53.779 ======================================================== 00:19:53.779 Total : 18579.40 72.58 3444.71 1194.46 4095.20 00:19:53.779 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEHR1tfiTz 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WEHR1tfiTz 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1298552 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1298552 /var/tmp/bdevperf.sock 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1298552 ']' 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:53.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.779 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.779 [2024-11-20 07:20:28.499438] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:19:53.779 [2024-11-20 07:20:28.499498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298552 ] 00:19:54.070 [2024-11-20 07:20:28.562730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.070 [2024-11-20 07:20:28.591985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.070 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.070 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.070 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WEHR1tfiTz 00:19:54.331 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.331 [2024-11-20 07:20:28.978265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.331 TLSTESTn1 00:19:54.331 07:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.597 Running I/O for 10 seconds... 
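[Annotation] The keys registered above were produced earlier by format_interchange_psk and written to /tmp/tmp.WEHR1tfiTz and /tmp/tmp.KOGKGNQnkf. The helper's body is not shown in the trace, only its inline python step; the sketch below reconstructs the observable transformation and assumes the CRC-32 of the key bytes is appended little-endian before base64 encoding, with digest selector 1 rendered as the 01 hash field:

# Hedged reconstruction of the key formatting traced above; the real helper
# lives in nvmf/common.sh and may differ in detail.
python - <<'EOF'
import base64, struct, zlib
prefix, key, digest = "NVMeTLSkey-1", b"00112233445566778899aabbccddeeff", 1
# Key bytes plus little-endian CRC-32 of the key, base64-encoded (assumption).
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print("%s:%02x:%s:" % (prefix, digest, blob))
EOF
# Expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Decoding the traced key back through base64 yields the 32 ASCII key characters plus four trailing bytes, which is what the CRC-32 assumption reflects.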
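[Annotation] The run_bdevperf path traced above drives the initiator side over that key: bdevperf starts idle (-z) on its own RPC socket, the key file is registered through that socket, the controller is attached with --psk, and bdevperf.py triggers the timed run that produces the TLSTESTn1 samples below. Condensed from the trace, with repository-relative paths:

# Start bdevperf idle on a private RPC socket (core mask and I/O shape as traced).
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Register the PSK file; key0 names the keyring entry used by --psk.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WEHR1tfiTz

# Attach over TLS to the listener configured on the target.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the workload against the TLSTESTn1 bdev created by the attach.
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests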
00:19:56.481 5548.00 IOPS, 21.67 MiB/s [2024-11-20T06:20:32.190Z] 5745.50 IOPS, 22.44 MiB/s [2024-11-20T06:20:33.573Z] 5572.67 IOPS, 21.77 MiB/s [2024-11-20T06:20:34.513Z] 5698.00 IOPS, 22.26 MiB/s [2024-11-20T06:20:35.455Z] 5744.40 IOPS, 22.44 MiB/s [2024-11-20T06:20:36.396Z] 5785.00 IOPS, 22.60 MiB/s [2024-11-20T06:20:37.337Z] 5834.57 IOPS, 22.79 MiB/s [2024-11-20T06:20:38.279Z] 5907.50 IOPS, 23.08 MiB/s [2024-11-20T06:20:39.219Z] 5720.78 IOPS, 22.35 MiB/s [2024-11-20T06:20:39.219Z] 5685.70 IOPS, 22.21 MiB/s 00:20:04.452 Latency(us) 00:20:04.452 [2024-11-20T06:20:39.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.452 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.452 Verification LBA range: start 0x0 length 0x2000 00:20:04.452 TLSTESTn1 : 10.02 5689.08 22.22 0.00 0.00 22462.26 6498.99 66846.72 00:20:04.452 [2024-11-20T06:20:39.219Z] =================================================================================================================== 00:20:04.452 [2024-11-20T06:20:39.219Z] Total : 5689.08 22.22 0.00 0.00 22462.26 6498.99 66846.72 00:20:04.452 { 00:20:04.452 "results": [ 00:20:04.452 { 00:20:04.452 "job": "TLSTESTn1", 00:20:04.452 "core_mask": "0x4", 00:20:04.452 "workload": "verify", 00:20:04.452 "status": "finished", 00:20:04.452 "verify_range": { 00:20:04.452 "start": 0, 00:20:04.452 "length": 8192 00:20:04.452 }, 00:20:04.452 "queue_depth": 128, 00:20:04.452 "io_size": 4096, 00:20:04.452 "runtime": 10.016379, 00:20:04.452 "iops": 5689.081852833245, 00:20:04.452 "mibps": 22.222975987629862, 00:20:04.452 "io_failed": 0, 00:20:04.452 "io_timeout": 0, 00:20:04.452 "avg_latency_us": 22462.256554822405, 00:20:04.452 "min_latency_us": 6498.986666666667, 00:20:04.452 "max_latency_us": 66846.72 00:20:04.452 } 00:20:04.452 ], 00:20:04.452 "core_count": 1 00:20:04.452 } 00:20:04.452 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.452 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1298552 00:20:04.452 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1298552 ']' 00:20:04.452 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1298552 00:20:04.452 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1298552 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1298552' 00:20:04.713 killing process with pid 1298552 00:20:04.713 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1298552 00:20:04.713 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.713 00:20:04.713 Latency(us) 00:20:04.713 [2024-11-20T06:20:39.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.713 [2024-11-20T06:20:39.480Z] 
=================================================================================================================== 00:20:04.713 [2024-11-20T06:20:39.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1298552 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KOGKGNQnkf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KOGKGNQnkf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KOGKGNQnkf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KOGKGNQnkf 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1300689 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1300689 /var/tmp/bdevperf.sock 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1300689 ']' 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.714 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.714 [2024-11-20 07:20:39.439542] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:04.714 [2024-11-20 07:20:39.439600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300689 ] 00:20:04.975 [2024-11-20 07:20:39.503324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.975 [2024-11-20 07:20:39.532026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.975 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.975 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:04.975 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KOGKGNQnkf 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.235 [2024-11-20 07:20:39.926250] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.235 [2024-11-20 07:20:39.930832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.235 [2024-11-20 07:20:39.931455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd0960 (107): Transport endpoint is not connected 00:20:05.235 [2024-11-20 07:20:39.932450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd0960 (9): Bad file descriptor 00:20:05.235 [2024-11-20 07:20:39.933452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:05.235 [2024-11-20 07:20:39.933459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.235 [2024-11-20 07:20:39.933465] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:05.235 [2024-11-20 07:20:39.933472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
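[Annotation] The tls.sh@147 case is a deliberate failure: /tmp/tmp.KOGKGNQnkf holds the second key, which was never registered on the target, so the handshake is rejected and the attach surfaces errno 107 and the JSON-RPC -5 response below. The NOT prefix inverts that outcome so the test passes when the attach fails. A minimal sketch of the inversion, inferred from the es bookkeeping visible in the trace (the in-tree helper additionally special-cases exit codes above 128, per the (( es > 128 )) check, which this sketch omits):

# Minimal NOT: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Plain inversion; the traced helper also inspects es > 128 (fatal signal).
    (( es != 0 ))
}

NOT false && echo "inversion works"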
00:20:05.235 request: 00:20:05.235 { 00:20:05.235 "name": "TLSTEST", 00:20:05.235 "trtype": "tcp", 00:20:05.235 "traddr": "10.0.0.2", 00:20:05.235 "adrfam": "ipv4", 00:20:05.235 "trsvcid": "4420", 00:20:05.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.235 "prchk_reftag": false, 00:20:05.235 "prchk_guard": false, 00:20:05.235 "hdgst": false, 00:20:05.235 "ddgst": false, 00:20:05.235 "psk": "key0", 00:20:05.235 "allow_unrecognized_csi": false, 00:20:05.235 "method": "bdev_nvme_attach_controller", 00:20:05.235 "req_id": 1 00:20:05.235 } 00:20:05.235 Got JSON-RPC error response 00:20:05.235 response: 00:20:05.235 { 00:20:05.235 "code": -5, 00:20:05.235 "message": "Input/output error" 00:20:05.235 } 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1300689 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1300689 ']' 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1300689 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.235 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300689 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300689' 00:20:05.495 killing process with pid 1300689 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1300689 00:20:05.495 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.495 00:20:05.495 Latency(us) 00:20:05.495 [2024-11-20T06:20:40.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.495 [2024-11-20T06:20:40.262Z] =================================================================================================================== 00:20:05.495 [2024-11-20T06:20:40.262Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1300689 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WEHR1tfiTz 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.WEHR1tfiTz 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.495 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WEHR1tfiTz 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WEHR1tfiTz 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1300709 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1300709 /var/tmp/bdevperf.sock 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1300709 ']' 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.496 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.496 [2024-11-20 07:20:40.166462] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:20:05.496 [2024-11-20 07:20:40.166521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300709 ] 00:20:05.496 [2024-11-20 07:20:40.229377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.496 [2024-11-20 07:20:40.258621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.756 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.756 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:05.756 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WEHR1tfiTz 00:20:05.756 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:06.016 [2024-11-20 07:20:40.668993] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.016 [2024-11-20 07:20:40.673382] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:06.016 [2024-11-20 07:20:40.673400] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:06.016 [2024-11-20 07:20:40.673419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.016 [2024-11-20 07:20:40.674073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d960 (107): Transport endpoint is not connected 00:20:06.016 [2024-11-20 07:20:40.675067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d960 (9): Bad file descriptor 00:20:06.016 [2024-11-20 07:20:40.676070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:06.016 [2024-11-20 07:20:40.676077] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.016 [2024-11-20 07:20:40.676083] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:06.016 [2024-11-20 07:20:40.676090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
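[Annotation] This second NOT case fails on the target side instead: the TLS handshake presents a PSK identity of the form NVMe0R01 <hostnqn> <subnqn> (as printed by tcp.c above), and since only host1 was added to cnode1 with --psk key0, the lookup for host2 finds no key, the server drops the connection, and the initiator reports the same errno 107 path. The identity string the target logged can be reproduced directly:

# The identity tcp_sock_get_key failed to resolve in the trace above.
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1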
00:20:06.016 request: 00:20:06.016 { 00:20:06.016 "name": "TLSTEST", 00:20:06.016 "trtype": "tcp", 00:20:06.016 "traddr": "10.0.0.2", 00:20:06.016 "adrfam": "ipv4", 00:20:06.016 "trsvcid": "4420", 00:20:06.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.016 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:06.016 "prchk_reftag": false, 00:20:06.016 "prchk_guard": false, 00:20:06.016 "hdgst": false, 00:20:06.016 "ddgst": false, 00:20:06.016 "psk": "key0", 00:20:06.016 "allow_unrecognized_csi": false, 00:20:06.016 "method": "bdev_nvme_attach_controller", 00:20:06.016 "req_id": 1 00:20:06.016 } 00:20:06.016 Got JSON-RPC error response 00:20:06.016 response: 00:20:06.016 { 00:20:06.016 "code": -5, 00:20:06.016 "message": "Input/output error" 00:20:06.016 } 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1300709 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1300709 ']' 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1300709 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300709 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300709' 00:20:06.016 killing process with pid 1300709 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1300709 00:20:06.016 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.016 00:20:06.016 Latency(us) 00:20:06.016 [2024-11-20T06:20:40.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.016 [2024-11-20T06:20:40.783Z] =================================================================================================================== 00:20:06.016 [2024-11-20T06:20:40.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.016 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1300709 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEHR1tfiTz 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WEHR1tfiTz 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WEHR1tfiTz 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WEHR1tfiTz 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1300985 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1300985 /var/tmp/bdevperf.sock 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1300985 ']' 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.275 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.275 [2024-11-20 07:20:40.916096] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:20:06.275 [2024-11-20 07:20:40.916154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300985 ] 00:20:06.275 [2024-11-20 07:20:40.978968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.275 [2024-11-20 07:20:41.007494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.536 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:06.536 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:06.536 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WEHR1tfiTz 00:20:06.536 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:06.796 [2024-11-20 07:20:41.393587] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.796 [2024-11-20 07:20:41.402488] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.796 [2024-11-20 07:20:41.402505] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:06.796 [2024-11-20 07:20:41.402526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.796 [2024-11-20 07:20:41.402680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bd960 (107): Transport endpoint is not connected 00:20:06.796 [2024-11-20 07:20:41.403671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bd960 (9): Bad file descriptor 00:20:06.796 [2024-11-20 07:20:41.404673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:06.796 [2024-11-20 07:20:41.404684] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.796 [2024-11-20 07:20:41.404690] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:06.796 [2024-11-20 07:20:41.404699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
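The failure above is a PSK identity mismatch rather than a network problem: the target derives an identity string from the connecting host NQN and subsystem NQN and looks it up among its configured PSKs, so a key registered for cnode1 can never match a connection to cnode2. A minimal sketch of how such an identity is assembled, assuming the "NVMe0R01 <hostnqn> <subnqn>" layout visible in the errors above (the leading 0 and the 01/02 hash suffix follow the NVMe/TCP PSK identity convention; treat the field semantics as an assumption here, not something this log states):

#!/usr/bin/env bash
# Sketch: reconstruct the TLS PSK identity the target failed to find above.
hostnqn="nqn.2016-06.io.spdk:host1"
subnqn="nqn.2016-06.io.spdk:cnode2"
version=0    # PSK identity version field
hash="01"    # retained-PSK hash suffix (assumed: 01 = SHA-256, 02 = SHA-384)
printf 'NVMe%sR%s %s %s\n' "$version" "$hash" "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2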
00:20:06.796 request: 00:20:06.796 { 00:20:06.796 "name": "TLSTEST", 00:20:06.796 "trtype": "tcp", 00:20:06.796 "traddr": "10.0.0.2", 00:20:06.796 "adrfam": "ipv4", 00:20:06.796 "trsvcid": "4420", 00:20:06.796 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:06.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.796 "prchk_reftag": false, 00:20:06.796 "prchk_guard": false, 00:20:06.796 "hdgst": false, 00:20:06.796 "ddgst": false, 00:20:06.796 "psk": "key0", 00:20:06.796 "allow_unrecognized_csi": false, 00:20:06.796 "method": "bdev_nvme_attach_controller", 00:20:06.796 "req_id": 1 00:20:06.796 } 00:20:06.796 Got JSON-RPC error response 00:20:06.796 response: 00:20:06.796 { 00:20:06.796 "code": -5, 00:20:06.796 "message": "Input/output error" 00:20:06.796 } 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1300985 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1300985 ']' 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1300985 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300985 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:06.796 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:06.797 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300985' 00:20:06.797 killing process with pid 1300985 00:20:06.797 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1300985 00:20:06.797 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.797 00:20:06.797 Latency(us) 00:20:06.797 [2024-11-20T06:20:41.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.797 [2024-11-20T06:20:41.564Z] =================================================================================================================== 00:20:06.797 [2024-11-20T06:20:41.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.797 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1300985 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.057 
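The NOT wrapper being traced here is the suite's negative-test helper: it runs the wrapped command expecting failure and itself exits 0 only when that command returns non-zero. A simplified sketch of the idea, modeled on the es bookkeeping visible in the trace but not SPDK's exact autotest_common.sh implementation:

# Simplified negative-test wrapper: succeeds iff the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && es=$((es & ~128))  # a death by signal still counts as failure
    ((es != 0))                        # invert: non-zero exit means the test passes
}

NOT false && echo "negative test passed"   # false fails, so NOT succeeds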
07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1301064 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1301064 /var/tmp/bdevperf.sock 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1301064 ']' 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.057 [2024-11-20 07:20:41.629700] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:20:07.057 [2024-11-20 07:20:41.629756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301064 ] 00:20:07.057 [2024-11-20 07:20:41.692799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.057 [2024-11-20 07:20:41.721322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:07.057 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:07.318 [2024-11-20 07:20:41.950962] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:07.318 [2024-11-20 07:20:41.950984] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:07.318 request: 00:20:07.318 { 00:20:07.318 "name": "key0", 00:20:07.318 "path": "", 00:20:07.318 "method": "keyring_file_add_key", 00:20:07.318 "req_id": 1 00:20:07.318 } 00:20:07.318 Got JSON-RPC error response 00:20:07.318 response: 00:20:07.318 { 00:20:07.318 "code": -1, 00:20:07.318 "message": "Operation not permitted" 00:20:07.318 } 00:20:07.318 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.579 [2024-11-20 07:20:42.103417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.579 [2024-11-20 07:20:42.103437] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:07.579 request: 00:20:07.579 { 00:20:07.579 "name": "TLSTEST", 00:20:07.579 "trtype": "tcp", 00:20:07.579 "traddr": "10.0.0.2", 00:20:07.579 "adrfam": "ipv4", 00:20:07.579 "trsvcid": "4420", 00:20:07.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.579 "prchk_reftag": false, 00:20:07.579 "prchk_guard": false, 00:20:07.579 "hdgst": false, 00:20:07.579 "ddgst": false, 00:20:07.579 "psk": "key0", 00:20:07.579 "allow_unrecognized_csi": false, 00:20:07.579 "method": "bdev_nvme_attach_controller", 00:20:07.579 "req_id": 1 00:20:07.579 } 00:20:07.579 Got JSON-RPC error response 00:20:07.579 response: 00:20:07.579 { 00:20:07.579 "code": -126, 00:20:07.579 "message": "Required key not available" 00:20:07.579 } 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1301064 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1301064 ']' 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1301064 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1301064 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1301064' 00:20:07.579 killing process with pid 1301064 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1301064 00:20:07.579 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.579 00:20:07.579 Latency(us) 00:20:07.579 [2024-11-20T06:20:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.579 [2024-11-20T06:20:42.346Z] =================================================================================================================== 00:20:07.579 [2024-11-20T06:20:42.346Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1301064 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1295776 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1295776 ']' 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1295776 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.579 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1295776 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1295776' 00:20:07.840 killing process with pid 1295776 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1295776 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1295776 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:07.840 07:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5ABiPJtRFp 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5ABiPJtRFp 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1301312 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1301312 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1301312 ']' 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.840 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.841 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.841 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.841 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.841 [2024-11-20 07:20:42.571390] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:07.841 [2024-11-20 07:20:42.571456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.101 [2024-11-20 07:20:42.667981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.101 [2024-11-20 07:20:42.698641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.101 [2024-11-20 07:20:42.698670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
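format_interchange_psk above turns the raw hex key into the NVMe TLS interchange form: the key bytes plus a little-endian CRC32 trailer are base64-encoded and framed as NVMeTLSkey-1:<digest>:<base64>:. A sketch that should reproduce the key_long value above under that assumption (digest 02 selects the longer-hash variant; the ASCII hex string itself serves as the key bytes, as in the test):

key="00112233445566778899aabbccddeeff0011223344556677"
python - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # key bytes; here the ASCII hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte little-endian CRC32 trailer
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF
# expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: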
00:20:08.101 [2024-11-20 07:20:42.698675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.101 [2024-11-20 07:20:42.698680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.101 [2024-11-20 07:20:42.698687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.101 [2024-11-20 07:20:42.699167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ABiPJtRFp 00:20:08.672 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.933 [2024-11-20 07:20:43.552573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.933 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.193 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.193 [2024-11-20 07:20:43.877361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.193 [2024-11-20 07:20:43.877558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.193 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.454 malloc0 00:20:09.454 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.714 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:09.714 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ABiPJtRFp 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
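Stepping back, the setup_nvmf_tgt sequence just traced is the complete target-side TLS bring-up: TCP transport, subsystem, TLS-enforcing listener, malloc-backed namespace, the PSK registered in the keyring, and a host entry bound to that key. Condensed into one sketch using the exact commands from the trace (only the rpc.py path is shortened into a variable):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.5ABiPJtRFp   # PSK interchange file, mode 0600

$rpc nvmf_create_transport -t tcp -o                                # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                                     # subsystem, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                                   # -k: listener requires TLS
$rpc bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB backing bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # attach as namespace 1
$rpc keyring_file_add_key key0 "$key"                               # register the PSK
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0                            # allow host1 with key0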
subnqn hostnqn psk 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5ABiPJtRFp 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1301766 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1301766 /var/tmp/bdevperf.sock 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1301766 ']' 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.975 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.975 [2024-11-20 07:20:44.603224] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
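On the initiator side, the test drives the freshly started bdevperf process entirely over its RPC socket, as the next trace lines show: register the same PSK under key0, attach a TLS-protected controller, then launch the queued verify workload. A condensed sketch of that flow with the sockets and arguments used here:

sock=/var/tmp/bdevperf.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$perf -t 20 -s "$sock" perform_tests   # run the queued -q 128 -o 4096 -w verify job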
00:20:09.975 [2024-11-20 07:20:44.603278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301766 ] 00:20:09.975 [2024-11-20 07:20:44.666883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.975 [2024-11-20 07:20:44.695724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.236 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.236 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:10.236 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:10.236 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.496 [2024-11-20 07:20:45.114001] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.496 TLSTESTn1 00:20:10.496 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:10.757 Running I/O for 10 seconds... 00:20:12.644 5398.00 IOPS, 21.09 MiB/s [2024-11-20T06:20:48.354Z] 5002.50 IOPS, 19.54 MiB/s [2024-11-20T06:20:49.739Z] 5384.67 IOPS, 21.03 MiB/s [2024-11-20T06:20:50.312Z] 5609.75 IOPS, 21.91 MiB/s [2024-11-20T06:20:51.698Z] 5736.20 IOPS, 22.41 MiB/s [2024-11-20T06:20:52.641Z] 5784.33 IOPS, 22.60 MiB/s [2024-11-20T06:20:53.585Z] 5721.14 IOPS, 22.35 MiB/s [2024-11-20T06:20:54.527Z] 5750.75 IOPS, 22.46 MiB/s [2024-11-20T06:20:55.470Z] 5696.56 IOPS, 22.25 MiB/s [2024-11-20T06:20:55.470Z] 5616.80 IOPS, 21.94 MiB/s 00:20:20.703 Latency(us) 00:20:20.703 [2024-11-20T06:20:55.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.703 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.703 Verification LBA range: start 0x0 length 0x2000 00:20:20.703 TLSTESTn1 : 10.02 5620.31 21.95 0.00 0.00 22741.36 4614.83 40632.32 00:20:20.703 [2024-11-20T06:20:55.470Z] =================================================================================================================== 00:20:20.703 [2024-11-20T06:20:55.470Z] Total : 5620.31 21.95 0.00 0.00 22741.36 4614.83 40632.32 00:20:20.703 { 00:20:20.703 "results": [ 00:20:20.703 { 00:20:20.703 "job": "TLSTESTn1", 00:20:20.703 "core_mask": "0x4", 00:20:20.703 "workload": "verify", 00:20:20.703 "status": "finished", 00:20:20.703 "verify_range": { 00:20:20.703 "start": 0, 00:20:20.703 "length": 8192 00:20:20.703 }, 00:20:20.703 "queue_depth": 128, 00:20:20.703 "io_size": 4096, 00:20:20.703 "runtime": 10.016356, 00:20:20.703 "iops": 5620.307425175383, 00:20:20.703 "mibps": 21.95432587959134, 00:20:20.703 "io_failed": 0, 00:20:20.703 "io_timeout": 0, 00:20:20.703 "avg_latency_us": 22741.36089102052, 00:20:20.703 "min_latency_us": 4614.826666666667, 00:20:20.703 "max_latency_us": 40632.32 00:20:20.703 } 00:20:20.703 ], 00:20:20.703 "core_count": 1 
00:20:20.703 } 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1301766 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1301766 ']' 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1301766 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1301766 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1301766' 00:20:20.703 killing process with pid 1301766 00:20:20.703 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1301766 00:20:20.703 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.703 00:20:20.703 Latency(us) 00:20:20.703 [2024-11-20T06:20:55.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.703 [2024-11-20T06:20:55.470Z] =================================================================================================================== 00:20:20.703 [2024-11-20T06:20:55.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.704 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1301766 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5ABiPJtRFp 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ABiPJtRFp 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ABiPJtRFp 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5ABiPJtRFp 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.965 07:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5ABiPJtRFp 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1303787 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1303787 /var/tmp/bdevperf.sock 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1303787 ']' 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.965 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.965 [2024-11-20 07:20:55.591751] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
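The attach attempt being set up here is expected to fail: the key file was re-moded to 0666 at target/tls.sh@171, and the keyring refuses PSK files readable by group or other, as the 0100666 error below shows. A sketch of an equivalent pre-flight check, assuming the owner-only policy implied by that error:

key=/tmp/tmp.5ABiPJtRFp
mode=$(stat -c '%a' "$key")      # e.g. 666 after the chmod above
if (( 0$mode & 077 )); then      # any group/other permission bits set?
    echo "refusing $key: mode 0$mode is not owner-only" >&2
fi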
00:20:20.965 [2024-11-20 07:20:55.591808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303787 ] 00:20:20.965 [2024-11-20 07:20:55.656493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.965 [2024-11-20 07:20:55.684363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.227 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.227 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:21.227 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:21.227 [2024-11-20 07:20:55.918133] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5ABiPJtRFp': 0100666 00:20:21.227 [2024-11-20 07:20:55.918158] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:21.227 request: 00:20:21.227 { 00:20:21.227 "name": "key0", 00:20:21.227 "path": "/tmp/tmp.5ABiPJtRFp", 00:20:21.227 "method": "keyring_file_add_key", 00:20:21.227 "req_id": 1 00:20:21.227 } 00:20:21.227 Got JSON-RPC error response 00:20:21.227 response: 00:20:21.227 { 00:20:21.227 "code": -1, 00:20:21.227 "message": "Operation not permitted" 00:20:21.227 } 00:20:21.227 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.488 [2024-11-20 07:20:56.098658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.488 [2024-11-20 07:20:56.098679] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:21.488 request: 00:20:21.488 { 00:20:21.488 "name": "TLSTEST", 00:20:21.488 "trtype": "tcp", 00:20:21.488 "traddr": "10.0.0.2", 00:20:21.488 "adrfam": "ipv4", 00:20:21.488 "trsvcid": "4420", 00:20:21.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.488 "prchk_reftag": false, 00:20:21.488 "prchk_guard": false, 00:20:21.488 "hdgst": false, 00:20:21.488 "ddgst": false, 00:20:21.488 "psk": "key0", 00:20:21.488 "allow_unrecognized_csi": false, 00:20:21.488 "method": "bdev_nvme_attach_controller", 00:20:21.488 "req_id": 1 00:20:21.488 } 00:20:21.488 Got JSON-RPC error response 00:20:21.488 response: 00:20:21.488 { 00:20:21.488 "code": -126, 00:20:21.488 "message": "Required key not available" 00:20:21.488 } 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1303787 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1303787 ']' 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1303787 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1303787 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1303787' 00:20:21.488 killing process with pid 1303787 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1303787 00:20:21.488 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.488 00:20:21.488 Latency(us) 00:20:21.488 [2024-11-20T06:20:56.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.488 [2024-11-20T06:20:56.255Z] =================================================================================================================== 00:20:21.488 [2024-11-20T06:20:56.255Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.488 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1303787 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1301312 ']' 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1301312' 00:20:21.750 killing process with pid 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1301312 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1304041 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1304041 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1304041 ']' 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.750 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.012 [2024-11-20 07:20:56.536123] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:22.012 [2024-11-20 07:20:56.536205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.012 [2024-11-20 07:20:56.634729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.012 [2024-11-20 07:20:56.668170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.012 [2024-11-20 07:20:56.668206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.012 [2024-11-20 07:20:56.668212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.012 [2024-11-20 07:20:56.668217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.012 [2024-11-20 07:20:56.668221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
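The run that follows exercises the failure ordering inside setup_nvmf_tgt: keyring_file_add_key rejects the still-0666 file, so key0 never exists, and nvmf_subsystem_add_host --psk key0 then fails with -32603 Internal error. A defensive variant would confirm the key registered before binding a host to it; a sketch, assuming keyring_get_keys lists registered keys as JSON objects with a "name" field:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.5ABiPJtRFp

if "$rpc" keyring_file_add_key key0 "$key"; then
    "$rpc" keyring_get_keys | grep -q '"name": "key0"' &&
        "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
            nqn.2016-06.io.spdk:host1 --psk key0
else
    echo "key registration failed; skipping add_host" >&2
fi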
00:20:22.012 [2024-11-20 07:20:56.668754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.584 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.584 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:22.584 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.584 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.584 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ABiPJtRFp 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:22.845 [2024-11-20 07:20:57.511585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.845 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.106 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.106 [2024-11-20 07:20:57.836378] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.106 [2024-11-20 07:20:57.836564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.106 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.367 malloc0 00:20:23.368 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.629 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:23.629 [2024-11-20 
07:20:58.323427] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5ABiPJtRFp': 0100666 00:20:23.629 [2024-11-20 07:20:58.323457] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:23.629 request: 00:20:23.629 { 00:20:23.629 "name": "key0", 00:20:23.629 "path": "/tmp/tmp.5ABiPJtRFp", 00:20:23.629 "method": "keyring_file_add_key", 00:20:23.629 "req_id": 1 00:20:23.629 } 00:20:23.629 Got JSON-RPC error response 00:20:23.629 response: 00:20:23.629 { 00:20:23.629 "code": -1, 00:20:23.629 "message": "Operation not permitted" 00:20:23.629 } 00:20:23.629 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.889 [2024-11-20 07:20:58.487851] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:23.889 [2024-11-20 07:20:58.487883] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:23.889 request: 00:20:23.889 { 00:20:23.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.890 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.890 "psk": "key0", 00:20:23.890 "method": "nvmf_subsystem_add_host", 00:20:23.890 "req_id": 1 00:20:23.890 } 00:20:23.890 Got JSON-RPC error response 00:20:23.890 response: 00:20:23.890 { 00:20:23.890 "code": -32603, 00:20:23.890 "message": "Internal error" 00:20:23.890 } 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1304041 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1304041 ']' 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1304041 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1304041 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1304041' 00:20:23.890 killing process with pid 1304041 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1304041 00:20:23.890 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1304041 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5ABiPJtRFp 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:24.151 07:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1304502 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1304502 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1304502 ']' 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.151 07:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.151 [2024-11-20 07:20:58.745243] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:24.151 [2024-11-20 07:20:58.745302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.151 [2024-11-20 07:20:58.843967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.151 [2024-11-20 07:20:58.873338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.151 [2024-11-20 07:20:58.873366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.151 [2024-11-20 07:20:58.873371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.151 [2024-11-20 07:20:58.873376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.151 [2024-11-20 07:20:58.873380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
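Once this target finishes bring-up, the test repeats the client attach with the restored 0600 key and then snapshots the running target via save_config, producing the JSON dump below. That dump is replayable; a minimal round-trip sketch (load_config is save_config's standard counterpart in rpc.py):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc save_config > /tmp/nvmf_tgt_config.json   # keyring, sock, bdev, nvmf sections
# ... later, against a freshly started target:
$rpc load_config < /tmp/nvmf_tgt_config.json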
00:20:24.151 [2024-11-20 07:20:58.873859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ABiPJtRFp 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.095 [2024-11-20 07:20:59.766614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.095 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.357 07:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.357 [2024-11-20 07:21:00.087409] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.357 [2024-11-20 07:21:00.087611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.357 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.619 malloc0 00:20:25.619 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.880 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:25.880 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1304910 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1304910 /var/tmp/bdevperf.sock 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1304910 ']' 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.140 07:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.140 [2024-11-20 07:21:00.805480] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:26.140 [2024-11-20 07:21:00.805542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304910 ] 00:20:26.140 [2024-11-20 07:21:00.869668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.140 [2024-11-20 07:21:00.898846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.082 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:27.082 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:27.082 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:27.082 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.343 [2024-11-20 07:21:01.902616] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.343 TLSTESTn1 00:20:27.343 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:27.603 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:27.603 "subsystems": [ 00:20:27.603 { 00:20:27.603 "subsystem": "keyring", 00:20:27.603 "config": [ 00:20:27.603 { 00:20:27.603 "method": "keyring_file_add_key", 00:20:27.603 "params": { 00:20:27.603 "name": "key0", 00:20:27.603 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:27.603 } 00:20:27.603 } 00:20:27.603 ] 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "subsystem": "iobuf", 00:20:27.603 "config": [ 00:20:27.603 { 00:20:27.603 "method": "iobuf_set_options", 00:20:27.603 "params": { 00:20:27.603 "small_pool_count": 8192, 00:20:27.603 "large_pool_count": 1024, 00:20:27.603 "small_bufsize": 8192, 00:20:27.603 "large_bufsize": 135168, 00:20:27.603 "enable_numa": false 00:20:27.603 } 00:20:27.603 } 00:20:27.603 ] 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "subsystem": "sock", 00:20:27.603 "config": [ 00:20:27.603 { 00:20:27.603 "method": "sock_set_default_impl", 00:20:27.603 "params": { 00:20:27.603 "impl_name": "posix" 
00:20:27.603 } 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "method": "sock_impl_set_options", 00:20:27.603 "params": { 00:20:27.603 "impl_name": "ssl", 00:20:27.603 "recv_buf_size": 4096, 00:20:27.603 "send_buf_size": 4096, 00:20:27.603 "enable_recv_pipe": true, 00:20:27.603 "enable_quickack": false, 00:20:27.603 "enable_placement_id": 0, 00:20:27.603 "enable_zerocopy_send_server": true, 00:20:27.603 "enable_zerocopy_send_client": false, 00:20:27.603 "zerocopy_threshold": 0, 00:20:27.603 "tls_version": 0, 00:20:27.603 "enable_ktls": false 00:20:27.603 } 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "method": "sock_impl_set_options", 00:20:27.603 "params": { 00:20:27.603 "impl_name": "posix", 00:20:27.603 "recv_buf_size": 2097152, 00:20:27.603 "send_buf_size": 2097152, 00:20:27.603 "enable_recv_pipe": true, 00:20:27.603 "enable_quickack": false, 00:20:27.603 "enable_placement_id": 0, 00:20:27.603 "enable_zerocopy_send_server": true, 00:20:27.603 "enable_zerocopy_send_client": false, 00:20:27.603 "zerocopy_threshold": 0, 00:20:27.603 "tls_version": 0, 00:20:27.603 "enable_ktls": false 00:20:27.603 } 00:20:27.603 } 00:20:27.603 ] 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "subsystem": "vmd", 00:20:27.603 "config": [] 00:20:27.603 }, 00:20:27.603 { 00:20:27.603 "subsystem": "accel", 00:20:27.603 "config": [ 00:20:27.603 { 00:20:27.603 "method": "accel_set_options", 00:20:27.603 "params": { 00:20:27.603 "small_cache_size": 128, 00:20:27.603 "large_cache_size": 16, 00:20:27.603 "task_count": 2048, 00:20:27.603 "sequence_count": 2048, 00:20:27.604 "buf_count": 2048 00:20:27.604 } 00:20:27.604 } 00:20:27.604 ] 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "subsystem": "bdev", 00:20:27.604 "config": [ 00:20:27.604 { 00:20:27.604 "method": "bdev_set_options", 00:20:27.604 "params": { 00:20:27.604 "bdev_io_pool_size": 65535, 00:20:27.604 "bdev_io_cache_size": 256, 00:20:27.604 "bdev_auto_examine": true, 00:20:27.604 "iobuf_small_cache_size": 128, 00:20:27.604 "iobuf_large_cache_size": 16 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_raid_set_options", 00:20:27.604 "params": { 00:20:27.604 "process_window_size_kb": 1024, 00:20:27.604 "process_max_bandwidth_mb_sec": 0 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_iscsi_set_options", 00:20:27.604 "params": { 00:20:27.604 "timeout_sec": 30 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_nvme_set_options", 00:20:27.604 "params": { 00:20:27.604 "action_on_timeout": "none", 00:20:27.604 "timeout_us": 0, 00:20:27.604 "timeout_admin_us": 0, 00:20:27.604 "keep_alive_timeout_ms": 10000, 00:20:27.604 "arbitration_burst": 0, 00:20:27.604 "low_priority_weight": 0, 00:20:27.604 "medium_priority_weight": 0, 00:20:27.604 "high_priority_weight": 0, 00:20:27.604 "nvme_adminq_poll_period_us": 10000, 00:20:27.604 "nvme_ioq_poll_period_us": 0, 00:20:27.604 "io_queue_requests": 0, 00:20:27.604 "delay_cmd_submit": true, 00:20:27.604 "transport_retry_count": 4, 00:20:27.604 "bdev_retry_count": 3, 00:20:27.604 "transport_ack_timeout": 0, 00:20:27.604 "ctrlr_loss_timeout_sec": 0, 00:20:27.604 "reconnect_delay_sec": 0, 00:20:27.604 "fast_io_fail_timeout_sec": 0, 00:20:27.604 "disable_auto_failback": false, 00:20:27.604 "generate_uuids": false, 00:20:27.604 "transport_tos": 0, 00:20:27.604 "nvme_error_stat": false, 00:20:27.604 "rdma_srq_size": 0, 00:20:27.604 "io_path_stat": false, 00:20:27.604 "allow_accel_sequence": false, 00:20:27.604 "rdma_max_cq_size": 0, 00:20:27.604 
"rdma_cm_event_timeout_ms": 0, 00:20:27.604 "dhchap_digests": [ 00:20:27.604 "sha256", 00:20:27.604 "sha384", 00:20:27.604 "sha512" 00:20:27.604 ], 00:20:27.604 "dhchap_dhgroups": [ 00:20:27.604 "null", 00:20:27.604 "ffdhe2048", 00:20:27.604 "ffdhe3072", 00:20:27.604 "ffdhe4096", 00:20:27.604 "ffdhe6144", 00:20:27.604 "ffdhe8192" 00:20:27.604 ] 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_nvme_set_hotplug", 00:20:27.604 "params": { 00:20:27.604 "period_us": 100000, 00:20:27.604 "enable": false 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_malloc_create", 00:20:27.604 "params": { 00:20:27.604 "name": "malloc0", 00:20:27.604 "num_blocks": 8192, 00:20:27.604 "block_size": 4096, 00:20:27.604 "physical_block_size": 4096, 00:20:27.604 "uuid": "188c18c8-37ef-4c04-8f35-678175ecca93", 00:20:27.604 "optimal_io_boundary": 0, 00:20:27.604 "md_size": 0, 00:20:27.604 "dif_type": 0, 00:20:27.604 "dif_is_head_of_md": false, 00:20:27.604 "dif_pi_format": 0 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "bdev_wait_for_examine" 00:20:27.604 } 00:20:27.604 ] 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "subsystem": "nbd", 00:20:27.604 "config": [] 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "subsystem": "scheduler", 00:20:27.604 "config": [ 00:20:27.604 { 00:20:27.604 "method": "framework_set_scheduler", 00:20:27.604 "params": { 00:20:27.604 "name": "static" 00:20:27.604 } 00:20:27.604 } 00:20:27.604 ] 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "subsystem": "nvmf", 00:20:27.604 "config": [ 00:20:27.604 { 00:20:27.604 "method": "nvmf_set_config", 00:20:27.604 "params": { 00:20:27.604 "discovery_filter": "match_any", 00:20:27.604 "admin_cmd_passthru": { 00:20:27.604 "identify_ctrlr": false 00:20:27.604 }, 00:20:27.604 "dhchap_digests": [ 00:20:27.604 "sha256", 00:20:27.604 "sha384", 00:20:27.604 "sha512" 00:20:27.604 ], 00:20:27.604 "dhchap_dhgroups": [ 00:20:27.604 "null", 00:20:27.604 "ffdhe2048", 00:20:27.604 "ffdhe3072", 00:20:27.604 "ffdhe4096", 00:20:27.604 "ffdhe6144", 00:20:27.604 "ffdhe8192" 00:20:27.604 ] 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_set_max_subsystems", 00:20:27.604 "params": { 00:20:27.604 "max_subsystems": 1024 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_set_crdt", 00:20:27.604 "params": { 00:20:27.604 "crdt1": 0, 00:20:27.604 "crdt2": 0, 00:20:27.604 "crdt3": 0 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_create_transport", 00:20:27.604 "params": { 00:20:27.604 "trtype": "TCP", 00:20:27.604 "max_queue_depth": 128, 00:20:27.604 "max_io_qpairs_per_ctrlr": 127, 00:20:27.604 "in_capsule_data_size": 4096, 00:20:27.604 "max_io_size": 131072, 00:20:27.604 "io_unit_size": 131072, 00:20:27.604 "max_aq_depth": 128, 00:20:27.604 "num_shared_buffers": 511, 00:20:27.604 "buf_cache_size": 4294967295, 00:20:27.604 "dif_insert_or_strip": false, 00:20:27.604 "zcopy": false, 00:20:27.604 "c2h_success": false, 00:20:27.604 "sock_priority": 0, 00:20:27.604 "abort_timeout_sec": 1, 00:20:27.604 "ack_timeout": 0, 00:20:27.604 "data_wr_pool_size": 0 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_create_subsystem", 00:20:27.604 "params": { 00:20:27.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.604 "allow_any_host": false, 00:20:27.604 "serial_number": "SPDK00000000000001", 00:20:27.604 "model_number": "SPDK bdev Controller", 00:20:27.604 "max_namespaces": 10, 00:20:27.604 "min_cntlid": 1, 00:20:27.604 
"max_cntlid": 65519, 00:20:27.604 "ana_reporting": false 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_subsystem_add_host", 00:20:27.604 "params": { 00:20:27.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.604 "host": "nqn.2016-06.io.spdk:host1", 00:20:27.604 "psk": "key0" 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_subsystem_add_ns", 00:20:27.604 "params": { 00:20:27.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.604 "namespace": { 00:20:27.604 "nsid": 1, 00:20:27.604 "bdev_name": "malloc0", 00:20:27.604 "nguid": "188C18C837EF4C048F35678175ECCA93", 00:20:27.604 "uuid": "188c18c8-37ef-4c04-8f35-678175ecca93", 00:20:27.604 "no_auto_visible": false 00:20:27.604 } 00:20:27.604 } 00:20:27.604 }, 00:20:27.604 { 00:20:27.604 "method": "nvmf_subsystem_add_listener", 00:20:27.604 "params": { 00:20:27.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.604 "listen_address": { 00:20:27.604 "trtype": "TCP", 00:20:27.604 "adrfam": "IPv4", 00:20:27.604 "traddr": "10.0.0.2", 00:20:27.604 "trsvcid": "4420" 00:20:27.604 }, 00:20:27.604 "secure_channel": true 00:20:27.604 } 00:20:27.604 } 00:20:27.604 ] 00:20:27.604 } 00:20:27.604 ] 00:20:27.604 }' 00:20:27.604 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:27.865 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:27.865 "subsystems": [ 00:20:27.865 { 00:20:27.865 "subsystem": "keyring", 00:20:27.865 "config": [ 00:20:27.865 { 00:20:27.865 "method": "keyring_file_add_key", 00:20:27.865 "params": { 00:20:27.865 "name": "key0", 00:20:27.865 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:27.865 } 00:20:27.865 } 00:20:27.865 ] 00:20:27.865 }, 00:20:27.865 { 00:20:27.865 "subsystem": "iobuf", 00:20:27.865 "config": [ 00:20:27.865 { 00:20:27.865 "method": "iobuf_set_options", 00:20:27.865 "params": { 00:20:27.865 "small_pool_count": 8192, 00:20:27.865 "large_pool_count": 1024, 00:20:27.865 "small_bufsize": 8192, 00:20:27.865 "large_bufsize": 135168, 00:20:27.865 "enable_numa": false 00:20:27.865 } 00:20:27.865 } 00:20:27.865 ] 00:20:27.865 }, 00:20:27.865 { 00:20:27.865 "subsystem": "sock", 00:20:27.865 "config": [ 00:20:27.865 { 00:20:27.865 "method": "sock_set_default_impl", 00:20:27.865 "params": { 00:20:27.865 "impl_name": "posix" 00:20:27.865 } 00:20:27.865 }, 00:20:27.865 { 00:20:27.865 "method": "sock_impl_set_options", 00:20:27.865 "params": { 00:20:27.865 "impl_name": "ssl", 00:20:27.865 "recv_buf_size": 4096, 00:20:27.865 "send_buf_size": 4096, 00:20:27.865 "enable_recv_pipe": true, 00:20:27.865 "enable_quickack": false, 00:20:27.865 "enable_placement_id": 0, 00:20:27.865 "enable_zerocopy_send_server": true, 00:20:27.865 "enable_zerocopy_send_client": false, 00:20:27.865 "zerocopy_threshold": 0, 00:20:27.865 "tls_version": 0, 00:20:27.865 "enable_ktls": false 00:20:27.865 } 00:20:27.865 }, 00:20:27.865 { 00:20:27.865 "method": "sock_impl_set_options", 00:20:27.865 "params": { 00:20:27.865 "impl_name": "posix", 00:20:27.865 "recv_buf_size": 2097152, 00:20:27.865 "send_buf_size": 2097152, 00:20:27.865 "enable_recv_pipe": true, 00:20:27.865 "enable_quickack": false, 00:20:27.865 "enable_placement_id": 0, 00:20:27.865 "enable_zerocopy_send_server": true, 00:20:27.866 "enable_zerocopy_send_client": false, 00:20:27.866 "zerocopy_threshold": 0, 00:20:27.866 "tls_version": 0, 00:20:27.866 "enable_ktls": false 00:20:27.866 } 00:20:27.866 
} 00:20:27.866 ] 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "subsystem": "vmd", 00:20:27.866 "config": [] 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "subsystem": "accel", 00:20:27.866 "config": [ 00:20:27.866 { 00:20:27.866 "method": "accel_set_options", 00:20:27.866 "params": { 00:20:27.866 "small_cache_size": 128, 00:20:27.866 "large_cache_size": 16, 00:20:27.866 "task_count": 2048, 00:20:27.866 "sequence_count": 2048, 00:20:27.866 "buf_count": 2048 00:20:27.866 } 00:20:27.866 } 00:20:27.866 ] 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "subsystem": "bdev", 00:20:27.866 "config": [ 00:20:27.866 { 00:20:27.866 "method": "bdev_set_options", 00:20:27.866 "params": { 00:20:27.866 "bdev_io_pool_size": 65535, 00:20:27.866 "bdev_io_cache_size": 256, 00:20:27.866 "bdev_auto_examine": true, 00:20:27.866 "iobuf_small_cache_size": 128, 00:20:27.866 "iobuf_large_cache_size": 16 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": "bdev_raid_set_options", 00:20:27.866 "params": { 00:20:27.866 "process_window_size_kb": 1024, 00:20:27.866 "process_max_bandwidth_mb_sec": 0 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": "bdev_iscsi_set_options", 00:20:27.866 "params": { 00:20:27.866 "timeout_sec": 30 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": "bdev_nvme_set_options", 00:20:27.866 "params": { 00:20:27.866 "action_on_timeout": "none", 00:20:27.866 "timeout_us": 0, 00:20:27.866 "timeout_admin_us": 0, 00:20:27.866 "keep_alive_timeout_ms": 10000, 00:20:27.866 "arbitration_burst": 0, 00:20:27.866 "low_priority_weight": 0, 00:20:27.866 "medium_priority_weight": 0, 00:20:27.866 "high_priority_weight": 0, 00:20:27.866 "nvme_adminq_poll_period_us": 10000, 00:20:27.866 "nvme_ioq_poll_period_us": 0, 00:20:27.866 "io_queue_requests": 512, 00:20:27.866 "delay_cmd_submit": true, 00:20:27.866 "transport_retry_count": 4, 00:20:27.866 "bdev_retry_count": 3, 00:20:27.866 "transport_ack_timeout": 0, 00:20:27.866 "ctrlr_loss_timeout_sec": 0, 00:20:27.866 "reconnect_delay_sec": 0, 00:20:27.866 "fast_io_fail_timeout_sec": 0, 00:20:27.866 "disable_auto_failback": false, 00:20:27.866 "generate_uuids": false, 00:20:27.866 "transport_tos": 0, 00:20:27.866 "nvme_error_stat": false, 00:20:27.866 "rdma_srq_size": 0, 00:20:27.866 "io_path_stat": false, 00:20:27.866 "allow_accel_sequence": false, 00:20:27.866 "rdma_max_cq_size": 0, 00:20:27.866 "rdma_cm_event_timeout_ms": 0, 00:20:27.866 "dhchap_digests": [ 00:20:27.866 "sha256", 00:20:27.866 "sha384", 00:20:27.866 "sha512" 00:20:27.866 ], 00:20:27.866 "dhchap_dhgroups": [ 00:20:27.866 "null", 00:20:27.866 "ffdhe2048", 00:20:27.866 "ffdhe3072", 00:20:27.866 "ffdhe4096", 00:20:27.866 "ffdhe6144", 00:20:27.866 "ffdhe8192" 00:20:27.866 ] 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": "bdev_nvme_attach_controller", 00:20:27.866 "params": { 00:20:27.866 "name": "TLSTEST", 00:20:27.866 "trtype": "TCP", 00:20:27.866 "adrfam": "IPv4", 00:20:27.866 "traddr": "10.0.0.2", 00:20:27.866 "trsvcid": "4420", 00:20:27.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.866 "prchk_reftag": false, 00:20:27.866 "prchk_guard": false, 00:20:27.866 "ctrlr_loss_timeout_sec": 0, 00:20:27.866 "reconnect_delay_sec": 0, 00:20:27.866 "fast_io_fail_timeout_sec": 0, 00:20:27.866 "psk": "key0", 00:20:27.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.866 "hdgst": false, 00:20:27.866 "ddgst": false, 00:20:27.866 "multipath": "multipath" 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": 
"bdev_nvme_set_hotplug", 00:20:27.866 "params": { 00:20:27.866 "period_us": 100000, 00:20:27.866 "enable": false 00:20:27.866 } 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "method": "bdev_wait_for_examine" 00:20:27.866 } 00:20:27.866 ] 00:20:27.866 }, 00:20:27.866 { 00:20:27.866 "subsystem": "nbd", 00:20:27.866 "config": [] 00:20:27.866 } 00:20:27.866 ] 00:20:27.866 }' 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1304910 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1304910 ']' 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1304910 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1304910 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1304910' 00:20:27.866 killing process with pid 1304910 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1304910 00:20:27.866 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.866 00:20:27.866 Latency(us) 00:20:27.866 [2024-11-20T06:21:02.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.866 [2024-11-20T06:21:02.633Z] =================================================================================================================== 00:20:27.866 [2024-11-20T06:21:02.633Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.866 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1304910 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1304502 ']' 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1304502' 00:20:28.128 killing process with pid 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1304502 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.128 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:28.128 "subsystems": [ 00:20:28.128 { 00:20:28.128 "subsystem": "keyring", 00:20:28.128 "config": [ 00:20:28.128 { 00:20:28.128 "method": "keyring_file_add_key", 00:20:28.128 "params": { 00:20:28.128 "name": "key0", 00:20:28.128 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:28.128 } 00:20:28.128 } 00:20:28.128 ] 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "subsystem": "iobuf", 00:20:28.128 "config": [ 00:20:28.128 { 00:20:28.128 "method": "iobuf_set_options", 00:20:28.128 "params": { 00:20:28.128 "small_pool_count": 8192, 00:20:28.128 "large_pool_count": 1024, 00:20:28.128 "small_bufsize": 8192, 00:20:28.128 "large_bufsize": 135168, 00:20:28.128 "enable_numa": false 00:20:28.128 } 00:20:28.128 } 00:20:28.128 ] 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "subsystem": "sock", 00:20:28.128 "config": [ 00:20:28.128 { 00:20:28.128 "method": "sock_set_default_impl", 00:20:28.128 "params": { 00:20:28.128 "impl_name": "posix" 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "sock_impl_set_options", 00:20:28.128 "params": { 00:20:28.128 "impl_name": "ssl", 00:20:28.128 "recv_buf_size": 4096, 00:20:28.128 "send_buf_size": 4096, 00:20:28.128 "enable_recv_pipe": true, 00:20:28.128 "enable_quickack": false, 00:20:28.128 "enable_placement_id": 0, 00:20:28.128 "enable_zerocopy_send_server": true, 00:20:28.128 "enable_zerocopy_send_client": false, 00:20:28.128 "zerocopy_threshold": 0, 00:20:28.128 "tls_version": 0, 00:20:28.128 "enable_ktls": false 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "sock_impl_set_options", 00:20:28.128 "params": { 00:20:28.128 "impl_name": "posix", 00:20:28.128 "recv_buf_size": 2097152, 00:20:28.128 "send_buf_size": 2097152, 00:20:28.128 "enable_recv_pipe": true, 00:20:28.128 "enable_quickack": false, 00:20:28.128 "enable_placement_id": 0, 00:20:28.128 "enable_zerocopy_send_server": true, 00:20:28.128 "enable_zerocopy_send_client": false, 00:20:28.128 "zerocopy_threshold": 0, 00:20:28.128 "tls_version": 0, 00:20:28.128 "enable_ktls": false 00:20:28.128 } 00:20:28.128 } 00:20:28.128 ] 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "subsystem": "vmd", 00:20:28.128 "config": [] 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "subsystem": "accel", 00:20:28.128 "config": [ 00:20:28.128 { 00:20:28.128 "method": "accel_set_options", 00:20:28.128 "params": { 00:20:28.128 "small_cache_size": 128, 00:20:28.128 "large_cache_size": 16, 00:20:28.128 "task_count": 2048, 00:20:28.128 "sequence_count": 2048, 00:20:28.128 "buf_count": 2048 00:20:28.128 } 00:20:28.128 } 00:20:28.128 ] 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "subsystem": "bdev", 00:20:28.128 "config": [ 00:20:28.128 { 00:20:28.128 "method": "bdev_set_options", 00:20:28.128 "params": { 00:20:28.128 "bdev_io_pool_size": 65535, 00:20:28.128 "bdev_io_cache_size": 256, 00:20:28.128 "bdev_auto_examine": true, 00:20:28.128 "iobuf_small_cache_size": 128, 00:20:28.128 "iobuf_large_cache_size": 16 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "bdev_raid_set_options", 00:20:28.128 "params": { 00:20:28.128 
"process_window_size_kb": 1024, 00:20:28.128 "process_max_bandwidth_mb_sec": 0 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "bdev_iscsi_set_options", 00:20:28.128 "params": { 00:20:28.128 "timeout_sec": 30 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "bdev_nvme_set_options", 00:20:28.128 "params": { 00:20:28.128 "action_on_timeout": "none", 00:20:28.128 "timeout_us": 0, 00:20:28.128 "timeout_admin_us": 0, 00:20:28.128 "keep_alive_timeout_ms": 10000, 00:20:28.128 "arbitration_burst": 0, 00:20:28.128 "low_priority_weight": 0, 00:20:28.128 "medium_priority_weight": 0, 00:20:28.128 "high_priority_weight": 0, 00:20:28.128 "nvme_adminq_poll_period_us": 10000, 00:20:28.128 "nvme_ioq_poll_period_us": 0, 00:20:28.128 "io_queue_requests": 0, 00:20:28.128 "delay_cmd_submit": true, 00:20:28.128 "transport_retry_count": 4, 00:20:28.128 "bdev_retry_count": 3, 00:20:28.128 "transport_ack_timeout": 0, 00:20:28.128 "ctrlr_loss_timeout_sec": 0, 00:20:28.128 "reconnect_delay_sec": 0, 00:20:28.128 "fast_io_fail_timeout_sec": 0, 00:20:28.128 "disable_auto_failback": false, 00:20:28.128 "generate_uuids": false, 00:20:28.128 "transport_tos": 0, 00:20:28.128 "nvme_error_stat": false, 00:20:28.128 "rdma_srq_size": 0, 00:20:28.128 "io_path_stat": false, 00:20:28.128 "allow_accel_sequence": false, 00:20:28.128 "rdma_max_cq_size": 0, 00:20:28.128 "rdma_cm_event_timeout_ms": 0, 00:20:28.128 "dhchap_digests": [ 00:20:28.128 "sha256", 00:20:28.128 "sha384", 00:20:28.128 "sha512" 00:20:28.128 ], 00:20:28.128 "dhchap_dhgroups": [ 00:20:28.128 "null", 00:20:28.128 "ffdhe2048", 00:20:28.128 "ffdhe3072", 00:20:28.128 "ffdhe4096", 00:20:28.128 "ffdhe6144", 00:20:28.128 "ffdhe8192" 00:20:28.128 ] 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "bdev_nvme_set_hotplug", 00:20:28.128 "params": { 00:20:28.128 "period_us": 100000, 00:20:28.128 "enable": false 00:20:28.128 } 00:20:28.128 }, 00:20:28.128 { 00:20:28.128 "method": "bdev_malloc_create", 00:20:28.128 "params": { 00:20:28.128 "name": "malloc0", 00:20:28.128 "num_blocks": 8192, 00:20:28.128 "block_size": 4096, 00:20:28.128 "physical_block_size": 4096, 00:20:28.129 "uuid": "188c18c8-37ef-4c04-8f35-678175ecca93", 00:20:28.129 "optimal_io_boundary": 0, 00:20:28.129 "md_size": 0, 00:20:28.129 "dif_type": 0, 00:20:28.129 "dif_is_head_of_md": false, 00:20:28.129 "dif_pi_format": 0 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "bdev_wait_for_examine" 00:20:28.129 } 00:20:28.129 ] 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "subsystem": "nbd", 00:20:28.129 "config": [] 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "subsystem": "scheduler", 00:20:28.129 "config": [ 00:20:28.129 { 00:20:28.129 "method": "framework_set_scheduler", 00:20:28.129 "params": { 00:20:28.129 "name": "static" 00:20:28.129 } 00:20:28.129 } 00:20:28.129 ] 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "subsystem": "nvmf", 00:20:28.129 "config": [ 00:20:28.129 { 00:20:28.129 "method": "nvmf_set_config", 00:20:28.129 "params": { 00:20:28.129 "discovery_filter": "match_any", 00:20:28.129 "admin_cmd_passthru": { 00:20:28.129 "identify_ctrlr": false 00:20:28.129 }, 00:20:28.129 "dhchap_digests": [ 00:20:28.129 "sha256", 00:20:28.129 "sha384", 00:20:28.129 "sha512" 00:20:28.129 ], 00:20:28.129 "dhchap_dhgroups": [ 00:20:28.129 "null", 00:20:28.129 "ffdhe2048", 00:20:28.129 "ffdhe3072", 00:20:28.129 "ffdhe4096", 00:20:28.129 "ffdhe6144", 00:20:28.129 "ffdhe8192" 00:20:28.129 ] 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 
00:20:28.129 "method": "nvmf_set_max_subsystems", 00:20:28.129 "params": { 00:20:28.129 "max_subsystems": 1024 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_set_crdt", 00:20:28.129 "params": { 00:20:28.129 "crdt1": 0, 00:20:28.129 "crdt2": 0, 00:20:28.129 "crdt3": 0 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_create_transport", 00:20:28.129 "params": { 00:20:28.129 "trtype": "TCP", 00:20:28.129 "max_queue_depth": 128, 00:20:28.129 "max_io_qpairs_per_ctrlr": 127, 00:20:28.129 "in_capsule_data_size": 4096, 00:20:28.129 "max_io_size": 131072, 00:20:28.129 "io_unit_size": 131072, 00:20:28.129 "max_aq_depth": 128, 00:20:28.129 "num_shared_buffers": 511, 00:20:28.129 "buf_cache_size": 4294967295, 00:20:28.129 "dif_insert_or_strip": false, 00:20:28.129 "zcopy": false, 00:20:28.129 "c2h_success": false, 00:20:28.129 "sock_priority": 0, 00:20:28.129 "abort_timeout_sec": 1, 00:20:28.129 "ack_timeout": 0, 00:20:28.129 "data_wr_pool_size": 0 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_create_subsystem", 00:20:28.129 "params": { 00:20:28.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.129 "allow_any_host": false, 00:20:28.129 "serial_number": "SPDK00000000000001", 00:20:28.129 "model_number": "SPDK bdev Controller", 00:20:28.129 "max_namespaces": 10, 00:20:28.129 "min_cntlid": 1, 00:20:28.129 "max_cntlid": 65519, 00:20:28.129 "ana_reporting": false 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_subsystem_add_host", 00:20:28.129 "params": { 00:20:28.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.129 "host": "nqn.2016-06.io.spdk:host1", 00:20:28.129 "psk": "key0" 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_subsystem_add_ns", 00:20:28.129 "params": { 00:20:28.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.129 "namespace": { 00:20:28.129 "nsid": 1, 00:20:28.129 "bdev_name": "malloc0", 00:20:28.129 "nguid": "188C18C837EF4C048F35678175ECCA93", 00:20:28.129 "uuid": "188c18c8-37ef-4c04-8f35-678175ecca93", 00:20:28.129 "no_auto_visible": false 00:20:28.129 } 00:20:28.129 } 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "method": "nvmf_subsystem_add_listener", 00:20:28.129 "params": { 00:20:28.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.129 "listen_address": { 00:20:28.129 "trtype": "TCP", 00:20:28.129 "adrfam": "IPv4", 00:20:28.129 "traddr": "10.0.0.2", 00:20:28.129 "trsvcid": "4420" 00:20:28.129 }, 00:20:28.129 "secure_channel": true 00:20:28.129 } 00:20:28.129 } 00:20:28.129 ] 00:20:28.129 } 00:20:28.129 ] 00:20:28.129 }' 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1305334 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1305334 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1305334 ']' 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:28.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:28.129 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.129 [2024-11-20 07:21:02.870531] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:28.129 [2024-11-20 07:21:02.870590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.390 [2024-11-20 07:21:02.967505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.390 [2024-11-20 07:21:02.996987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.390 [2024-11-20 07:21:02.997013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.390 [2024-11-20 07:21:02.997019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.390 [2024-11-20 07:21:02.997024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.390 [2024-11-20 07:21:02.997029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.390 [2024-11-20 07:21:02.997517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.650 [2024-11-20 07:21:03.191368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.650 [2024-11-20 07:21:03.223394] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.650 [2024-11-20 07:21:03.223591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.913 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.913 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:28.913 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.913 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.913 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1305680 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1305680 /var/tmp/bdevperf.sock 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1305680 ']' 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
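The -c /dev/fd/62 and -c /dev/fd/63 arguments above come from feeding the JSON captured by save_config back in through bash process substitution, so the replayed configuration never touches disk. A sketch, assuming the $tgtconf and $bdevperfconf variables hold the dumps captured earlier:

tgtconf=$($rpc save_config)                                # captured from the first target (target/tls.sh@198)
bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config) # captured from the first bdevperf (target/tls.sh@199)

# <(echo ...) shows up as /dev/fd/6x in the child's argv, as seen in the trace.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &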
00:20:29.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.174 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:29.174 "subsystems": [ 00:20:29.174 { 00:20:29.174 "subsystem": "keyring", 00:20:29.174 "config": [ 00:20:29.174 { 00:20:29.174 "method": "keyring_file_add_key", 00:20:29.174 "params": { 00:20:29.174 "name": "key0", 00:20:29.174 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:29.174 } 00:20:29.174 } 00:20:29.174 ] 00:20:29.174 }, 00:20:29.174 { 00:20:29.174 "subsystem": "iobuf", 00:20:29.174 "config": [ 00:20:29.174 { 00:20:29.174 "method": "iobuf_set_options", 00:20:29.174 "params": { 00:20:29.174 "small_pool_count": 8192, 00:20:29.174 "large_pool_count": 1024, 00:20:29.174 "small_bufsize": 8192, 00:20:29.174 "large_bufsize": 135168, 00:20:29.174 "enable_numa": false 00:20:29.174 } 00:20:29.174 } 00:20:29.174 ] 00:20:29.174 }, 00:20:29.174 { 00:20:29.174 "subsystem": "sock", 00:20:29.174 "config": [ 00:20:29.174 { 00:20:29.174 "method": "sock_set_default_impl", 00:20:29.174 "params": { 00:20:29.175 "impl_name": "posix" 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "sock_impl_set_options", 00:20:29.175 "params": { 00:20:29.175 "impl_name": "ssl", 00:20:29.175 "recv_buf_size": 4096, 00:20:29.175 "send_buf_size": 4096, 00:20:29.175 "enable_recv_pipe": true, 00:20:29.175 "enable_quickack": false, 00:20:29.175 "enable_placement_id": 0, 00:20:29.175 "enable_zerocopy_send_server": true, 00:20:29.175 "enable_zerocopy_send_client": false, 00:20:29.175 "zerocopy_threshold": 0, 00:20:29.175 "tls_version": 0, 00:20:29.175 "enable_ktls": false 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "sock_impl_set_options", 00:20:29.175 "params": { 00:20:29.175 "impl_name": "posix", 00:20:29.175 "recv_buf_size": 2097152, 00:20:29.175 "send_buf_size": 2097152, 00:20:29.175 "enable_recv_pipe": true, 00:20:29.175 "enable_quickack": false, 00:20:29.175 "enable_placement_id": 0, 00:20:29.175 "enable_zerocopy_send_server": true, 00:20:29.175 "enable_zerocopy_send_client": false, 00:20:29.175 "zerocopy_threshold": 0, 00:20:29.175 "tls_version": 0, 00:20:29.175 "enable_ktls": false 00:20:29.175 } 00:20:29.175 } 00:20:29.175 ] 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "subsystem": "vmd", 00:20:29.175 "config": [] 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "subsystem": "accel", 00:20:29.175 "config": [ 00:20:29.175 { 00:20:29.175 "method": "accel_set_options", 00:20:29.175 "params": { 00:20:29.175 "small_cache_size": 128, 00:20:29.175 "large_cache_size": 16, 00:20:29.175 "task_count": 2048, 00:20:29.175 "sequence_count": 2048, 00:20:29.175 "buf_count": 2048 00:20:29.175 } 00:20:29.175 } 00:20:29.175 ] 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "subsystem": "bdev", 00:20:29.175 "config": [ 00:20:29.175 { 00:20:29.175 "method": "bdev_set_options", 00:20:29.175 "params": { 00:20:29.175 "bdev_io_pool_size": 65535, 00:20:29.175 "bdev_io_cache_size": 256, 00:20:29.175 "bdev_auto_examine": true, 00:20:29.175 "iobuf_small_cache_size": 128, 
00:20:29.175 "iobuf_large_cache_size": 16 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_raid_set_options", 00:20:29.175 "params": { 00:20:29.175 "process_window_size_kb": 1024, 00:20:29.175 "process_max_bandwidth_mb_sec": 0 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_iscsi_set_options", 00:20:29.175 "params": { 00:20:29.175 "timeout_sec": 30 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_nvme_set_options", 00:20:29.175 "params": { 00:20:29.175 "action_on_timeout": "none", 00:20:29.175 "timeout_us": 0, 00:20:29.175 "timeout_admin_us": 0, 00:20:29.175 "keep_alive_timeout_ms": 10000, 00:20:29.175 "arbitration_burst": 0, 00:20:29.175 "low_priority_weight": 0, 00:20:29.175 "medium_priority_weight": 0, 00:20:29.175 "high_priority_weight": 0, 00:20:29.175 "nvme_adminq_poll_period_us": 10000, 00:20:29.175 "nvme_ioq_poll_period_us": 0, 00:20:29.175 "io_queue_requests": 512, 00:20:29.175 "delay_cmd_submit": true, 00:20:29.175 "transport_retry_count": 4, 00:20:29.175 "bdev_retry_count": 3, 00:20:29.175 "transport_ack_timeout": 0, 00:20:29.175 "ctrlr_loss_timeout_sec": 0, 00:20:29.175 "reconnect_delay_sec": 0, 00:20:29.175 "fast_io_fail_timeout_sec": 0, 00:20:29.175 "disable_auto_failback": false, 00:20:29.175 "generate_uuids": false, 00:20:29.175 "transport_tos": 0, 00:20:29.175 "nvme_error_stat": false, 00:20:29.175 "rdma_srq_size": 0, 00:20:29.175 "io_path_stat": false, 00:20:29.175 "allow_accel_sequence": false, 00:20:29.175 "rdma_max_cq_size": 0, 00:20:29.175 "rdma_cm_event_timeout_ms": 0, 00:20:29.175 "dhchap_digests": [ 00:20:29.175 "sha256", 00:20:29.175 "sha384", 00:20:29.175 "sha512" 00:20:29.175 ], 00:20:29.175 "dhchap_dhgroups": [ 00:20:29.175 "null", 00:20:29.175 "ffdhe2048", 00:20:29.175 "ffdhe3072", 00:20:29.175 "ffdhe4096", 00:20:29.175 "ffdhe6144", 00:20:29.175 "ffdhe8192" 00:20:29.175 ] 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_nvme_attach_controller", 00:20:29.175 "params": { 00:20:29.175 "name": "TLSTEST", 00:20:29.175 "trtype": "TCP", 00:20:29.175 "adrfam": "IPv4", 00:20:29.175 "traddr": "10.0.0.2", 00:20:29.175 "trsvcid": "4420", 00:20:29.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.175 "prchk_reftag": false, 00:20:29.175 "prchk_guard": false, 00:20:29.175 "ctrlr_loss_timeout_sec": 0, 00:20:29.175 "reconnect_delay_sec": 0, 00:20:29.175 "fast_io_fail_timeout_sec": 0, 00:20:29.175 "psk": "key0", 00:20:29.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.175 "hdgst": false, 00:20:29.175 "ddgst": false, 00:20:29.175 "multipath": "multipath" 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_nvme_set_hotplug", 00:20:29.175 "params": { 00:20:29.175 "period_us": 100000, 00:20:29.175 "enable": false 00:20:29.175 } 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "method": "bdev_wait_for_examine" 00:20:29.175 } 00:20:29.175 ] 00:20:29.175 }, 00:20:29.175 { 00:20:29.175 "subsystem": "nbd", 00:20:29.175 "config": [] 00:20:29.175 } 00:20:29.175 ] 00:20:29.175 }' 00:20:29.175 [2024-11-20 07:21:03.741805] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:20:29.175 [2024-11-20 07:21:03.741858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305680 ] 00:20:29.175 [2024-11-20 07:21:03.804719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.175 [2024-11-20 07:21:03.833759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.436 [2024-11-20 07:21:03.969346] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.007 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.007 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:30.007 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:30.007 Running I/O for 10 seconds... 00:20:31.890 5823.00 IOPS, 22.75 MiB/s [2024-11-20T06:21:08.039Z] 5862.50 IOPS, 22.90 MiB/s [2024-11-20T06:21:09.004Z] 5943.33 IOPS, 23.22 MiB/s [2024-11-20T06:21:09.944Z] 6017.25 IOPS, 23.50 MiB/s [2024-11-20T06:21:10.997Z] 6088.20 IOPS, 23.78 MiB/s [2024-11-20T06:21:11.961Z] 6118.50 IOPS, 23.90 MiB/s [2024-11-20T06:21:12.905Z] 6115.71 IOPS, 23.89 MiB/s [2024-11-20T06:21:13.846Z] 6112.12 IOPS, 23.88 MiB/s [2024-11-20T06:21:14.788Z] 6098.22 IOPS, 23.82 MiB/s [2024-11-20T06:21:14.788Z] 6090.80 IOPS, 23.79 MiB/s 00:20:40.021 Latency(us) 00:20:40.021 [2024-11-20T06:21:14.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.021 Verification LBA range: start 0x0 length 0x2000 00:20:40.021 TLSTESTn1 : 10.02 6090.25 23.79 0.00 0.00 20979.16 4751.36 24794.45 00:20:40.021 [2024-11-20T06:21:14.788Z] =================================================================================================================== 00:20:40.021 [2024-11-20T06:21:14.788Z] Total : 6090.25 23.79 0.00 0.00 20979.16 4751.36 24794.45 00:20:40.021 { 00:20:40.021 "results": [ 00:20:40.021 { 00:20:40.021 "job": "TLSTESTn1", 00:20:40.021 "core_mask": "0x4", 00:20:40.021 "workload": "verify", 00:20:40.021 "status": "finished", 00:20:40.021 "verify_range": { 00:20:40.021 "start": 0, 00:20:40.021 "length": 8192 00:20:40.021 }, 00:20:40.021 "queue_depth": 128, 00:20:40.021 "io_size": 4096, 00:20:40.021 "runtime": 10.021764, 00:20:40.021 "iops": 6090.245190367684, 00:20:40.021 "mibps": 23.790020274873765, 00:20:40.021 "io_failed": 0, 00:20:40.021 "io_timeout": 0, 00:20:40.021 "avg_latency_us": 20979.157903061085, 00:20:40.021 "min_latency_us": 4751.36, 00:20:40.021 "max_latency_us": 24794.453333333335 00:20:40.021 } 00:20:40.021 ], 00:20:40.021 "core_count": 1 00:20:40.021 } 00:20:40.021 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.021 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1305680 00:20:40.021 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1305680 ']' 00:20:40.021 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1305680 00:20:40.021 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1305680 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1305680' 00:20:40.022 killing process with pid 1305680 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1305680 00:20:40.022 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.022 00:20:40.022 Latency(us) 00:20:40.022 [2024-11-20T06:21:14.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.022 [2024-11-20T06:21:14.789Z] =================================================================================================================== 00:20:40.022 [2024-11-20T06:21:14.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.022 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1305680 00:20:40.282 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1305334 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1305334 ']' 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1305334 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1305334 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1305334' 00:20:40.283 killing process with pid 1305334 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1305334 00:20:40.283 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1305334 00:20:40.283 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:40.283 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.283 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.283 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1308155 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1308155 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1308155 ']' 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.543 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.543 [2024-11-20 07:21:15.101846] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:40.543 [2024-11-20 07:21:15.101913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.543 [2024-11-20 07:21:15.186587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.543 [2024-11-20 07:21:15.222338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.543 [2024-11-20 07:21:15.222371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.544 [2024-11-20 07:21:15.222380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.544 [2024-11-20 07:21:15.222388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.544 [2024-11-20 07:21:15.222393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
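Once the target side is rebuilt, the initiator half of the run that follows amounts to registering the same PSK with bdevperf and attaching a TLS-protected controller. A sketch mirroring the rpc.py calls traced below, assuming bdevperf is already listening on /var/tmp/bdevperf.sock:

# Same key, registered this time with the bdevperf application (target/tls.sh@229).
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"

# Attach over TCP with --psk; the TLS handshake uses key0 (target/tls.sh@230).
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Kick off the configured verify workload (target/tls.sh@234).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests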
00:20:40.544 [2024-11-20 07:21:15.222961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5ABiPJtRFp 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5ABiPJtRFp 00:20:41.484 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.484 [2024-11-20 07:21:16.076352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.484 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.485 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.745 [2024-11-20 07:21:16.397145] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.745 [2024-11-20 07:21:16.397369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.745 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.006 malloc0 00:20:42.006 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.006 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:42.266 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1308596 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1308596 /var/tmp/bdevperf.sock 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1308596 ']' 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.527 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.527 [2024-11-20 07:21:17.122390] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:42.527 [2024-11-20 07:21:17.122444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308596 ] 00:20:42.527 [2024-11-20 07:21:17.211415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.527 [2024-11-20 07:21:17.241245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.467 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.467 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:43.467 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:43.467 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:43.727 [2024-11-20 07:21:18.246206] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.727 nvme0n1 00:20:43.727 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.727 Running I/O for 1 seconds... 
00:20:44.927 4184.00 IOPS, 16.34 MiB/s 00:20:44.927 Latency(us) 00:20:44.927 [2024-11-20T06:21:19.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.927 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:44.927 Verification LBA range: start 0x0 length 0x2000 00:20:44.927 nvme0n1 : 1.02 4246.31 16.59 0.00 0.00 29942.62 5515.95 59856.21 00:20:44.927 [2024-11-20T06:21:19.694Z] =================================================================================================================== 00:20:44.927 [2024-11-20T06:21:19.694Z] Total : 4246.31 16.59 0.00 0.00 29942.62 5515.95 59856.21 00:20:44.927 { 00:20:44.927 "results": [ 00:20:44.927 { 00:20:44.927 "job": "nvme0n1", 00:20:44.927 "core_mask": "0x2", 00:20:44.927 "workload": "verify", 00:20:44.927 "status": "finished", 00:20:44.927 "verify_range": { 00:20:44.927 "start": 0, 00:20:44.927 "length": 8192 00:20:44.927 }, 00:20:44.927 "queue_depth": 128, 00:20:44.927 "io_size": 4096, 00:20:44.927 "runtime": 1.015705, 00:20:44.927 "iops": 4246.311675141897, 00:20:44.927 "mibps": 16.587154981023033, 00:20:44.927 "io_failed": 0, 00:20:44.927 "io_timeout": 0, 00:20:44.927 "avg_latency_us": 29942.617883916835, 00:20:44.927 "min_latency_us": 5515.946666666667, 00:20:44.927 "max_latency_us": 59856.21333333333 00:20:44.927 } 00:20:44.927 ], 00:20:44.928 "core_count": 1 00:20:44.928 } 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1308596 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1308596 ']' 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1308596 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1308596 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1308596' 00:20:44.928 killing process with pid 1308596 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1308596 00:20:44.928 Received shutdown signal, test time was about 1.000000 seconds 00:20:44.928 00:20:44.928 Latency(us) 00:20:44.928 [2024-11-20T06:21:19.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.928 [2024-11-20T06:21:19.695Z] =================================================================================================================== 00:20:44.928 [2024-11-20T06:21:19.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1308596 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1308155 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1308155 ']' 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1308155 00:20:44.928 07:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.928 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1308155 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1308155' 00:20:45.188 killing process with pid 1308155 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1308155 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1308155 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1309202 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1309202 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1309202 ']' 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.188 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.188 [2024-11-20 07:21:19.884909] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:45.189 [2024-11-20 07:21:19.884962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.450 [2024-11-20 07:21:19.970092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.450 [2024-11-20 07:21:20.003137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.450 [2024-11-20 07:21:20.003174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:45.450 [2024-11-20 07:21:20.003181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.450 [2024-11-20 07:21:20.003188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.450 [2024-11-20 07:21:20.003194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.450 [2024-11-20 07:21:20.003678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 [2024-11-20 07:21:20.726210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.021 malloc0 00:20:46.021 [2024-11-20 07:21:20.752940] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.021 [2024-11-20 07:21:20.753170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1309405 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1309405 /var/tmp/bdevperf.sock 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1309405 ']' 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.021 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.281 [2024-11-20 07:21:20.832260] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
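[Editor's note] The app_setup_trace notices just above spell out how to inspect the tracepoints enabled by the target's -e 0xFFFF mask. A sketch of the two options the log mentions, assuming the usual SPDK build tree layout ($output_dir is illustrative):

  # Snapshot the nvmf trace ring of shm instance 0 while the target is running:
  build/bin/spdk_trace -s nvmf -i 0
  # ...or keep the raw ring buffer for offline analysis after shutdown (the
  # harness does exactly this later, tarring nvmf_trace.0 out of /dev/shm):
  cp /dev/shm/nvmf_trace.0 "$output_dir/"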
00:20:46.281 [2024-11-20 07:21:20.832313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309405 ] 00:20:46.281 [2024-11-20 07:21:20.920767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.282 [2024-11-20 07:21:20.950752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.852 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.852 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:46.852 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5ABiPJtRFp 00:20:47.113 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:47.374 [2024-11-20 07:21:21.951644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.374 nvme0n1 00:20:47.374 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:47.374 Running I/O for 1 seconds... 00:20:48.758 4232.00 IOPS, 16.53 MiB/s 00:20:48.758 Latency(us) 00:20:48.758 [2024-11-20T06:21:23.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:48.758 Verification LBA range: start 0x0 length 0x2000 00:20:48.758 nvme0n1 : 1.03 4226.02 16.51 0.00 0.00 29914.44 5843.63 29928.11 00:20:48.758 [2024-11-20T06:21:23.525Z] =================================================================================================================== 00:20:48.758 [2024-11-20T06:21:23.525Z] Total : 4226.02 16.51 0.00 0.00 29914.44 5843.63 29928.11 00:20:48.758 { 00:20:48.758 "results": [ 00:20:48.758 { 00:20:48.758 "job": "nvme0n1", 00:20:48.758 "core_mask": "0x2", 00:20:48.758 "workload": "verify", 00:20:48.758 "status": "finished", 00:20:48.758 "verify_range": { 00:20:48.758 "start": 0, 00:20:48.758 "length": 8192 00:20:48.758 }, 00:20:48.758 "queue_depth": 128, 00:20:48.758 "io_size": 4096, 00:20:48.758 "runtime": 1.031704, 00:20:48.758 "iops": 4226.0183153307535, 00:20:48.758 "mibps": 16.507884044260756, 00:20:48.758 "io_failed": 0, 00:20:48.758 "io_timeout": 0, 00:20:48.758 "avg_latency_us": 29914.44393883792, 00:20:48.758 "min_latency_us": 5843.626666666667, 00:20:48.758 "max_latency_us": 29928.106666666667 00:20:48.758 } 00:20:48.758 ], 00:20:48.758 "core_count": 1 00:20:48.758 } 00:20:48.758 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:48.758 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.758 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.758 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.758 07:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:48.758 "subsystems": [ 00:20:48.758 { 00:20:48.758 "subsystem": "keyring", 00:20:48.758 "config": [ 00:20:48.758 { 00:20:48.758 "method": "keyring_file_add_key", 00:20:48.758 "params": { 00:20:48.758 "name": "key0", 00:20:48.758 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:48.758 } 00:20:48.758 } 00:20:48.758 ] 00:20:48.758 }, 00:20:48.758 { 00:20:48.758 "subsystem": "iobuf", 00:20:48.758 "config": [ 00:20:48.758 { 00:20:48.758 "method": "iobuf_set_options", 00:20:48.758 "params": { 00:20:48.758 "small_pool_count": 8192, 00:20:48.758 "large_pool_count": 1024, 00:20:48.758 "small_bufsize": 8192, 00:20:48.758 "large_bufsize": 135168, 00:20:48.758 "enable_numa": false 00:20:48.758 } 00:20:48.758 } 00:20:48.758 ] 00:20:48.758 }, 00:20:48.759 { 00:20:48.759 "subsystem": "sock", 00:20:48.759 "config": [ 00:20:48.759 { 00:20:48.759 "method": "sock_set_default_impl", 00:20:48.759 "params": { 00:20:48.759 "impl_name": "posix" 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "sock_impl_set_options", 00:20:48.759 "params": { 00:20:48.759 "impl_name": "ssl", 00:20:48.759 "recv_buf_size": 4096, 00:20:48.759 "send_buf_size": 4096, 00:20:48.759 "enable_recv_pipe": true, 00:20:48.759 "enable_quickack": false, 00:20:48.759 "enable_placement_id": 0, 00:20:48.759 "enable_zerocopy_send_server": true, 00:20:48.759 "enable_zerocopy_send_client": false, 00:20:48.759 "zerocopy_threshold": 0, 00:20:48.759 "tls_version": 0, 00:20:48.759 "enable_ktls": false 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "sock_impl_set_options", 00:20:48.759 "params": { 00:20:48.759 "impl_name": "posix", 00:20:48.759 "recv_buf_size": 2097152, 00:20:48.759 "send_buf_size": 2097152, 00:20:48.759 "enable_recv_pipe": true, 00:20:48.759 "enable_quickack": false, 00:20:48.759 "enable_placement_id": 0, 00:20:48.759 "enable_zerocopy_send_server": true, 00:20:48.759 "enable_zerocopy_send_client": false, 00:20:48.759 "zerocopy_threshold": 0, 00:20:48.759 "tls_version": 0, 00:20:48.759 "enable_ktls": false 00:20:48.759 } 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "vmd", 00:20:48.759 "config": [] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "accel", 00:20:48.759 "config": [ 00:20:48.759 { 00:20:48.759 "method": "accel_set_options", 00:20:48.759 "params": { 00:20:48.759 "small_cache_size": 128, 00:20:48.759 "large_cache_size": 16, 00:20:48.759 "task_count": 2048, 00:20:48.759 "sequence_count": 2048, 00:20:48.759 "buf_count": 2048 00:20:48.759 } 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "bdev", 00:20:48.759 "config": [ 00:20:48.759 { 00:20:48.759 "method": "bdev_set_options", 00:20:48.759 "params": { 00:20:48.759 "bdev_io_pool_size": 65535, 00:20:48.759 "bdev_io_cache_size": 256, 00:20:48.759 "bdev_auto_examine": true, 00:20:48.759 "iobuf_small_cache_size": 128, 00:20:48.759 "iobuf_large_cache_size": 16 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_raid_set_options", 00:20:48.759 "params": { 00:20:48.759 "process_window_size_kb": 1024, 00:20:48.759 "process_max_bandwidth_mb_sec": 0 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_iscsi_set_options", 00:20:48.759 "params": { 00:20:48.759 "timeout_sec": 30 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_nvme_set_options", 00:20:48.759 "params": { 00:20:48.759 "action_on_timeout": "none", 00:20:48.759 
"timeout_us": 0, 00:20:48.759 "timeout_admin_us": 0, 00:20:48.759 "keep_alive_timeout_ms": 10000, 00:20:48.759 "arbitration_burst": 0, 00:20:48.759 "low_priority_weight": 0, 00:20:48.759 "medium_priority_weight": 0, 00:20:48.759 "high_priority_weight": 0, 00:20:48.759 "nvme_adminq_poll_period_us": 10000, 00:20:48.759 "nvme_ioq_poll_period_us": 0, 00:20:48.759 "io_queue_requests": 0, 00:20:48.759 "delay_cmd_submit": true, 00:20:48.759 "transport_retry_count": 4, 00:20:48.759 "bdev_retry_count": 3, 00:20:48.759 "transport_ack_timeout": 0, 00:20:48.759 "ctrlr_loss_timeout_sec": 0, 00:20:48.759 "reconnect_delay_sec": 0, 00:20:48.759 "fast_io_fail_timeout_sec": 0, 00:20:48.759 "disable_auto_failback": false, 00:20:48.759 "generate_uuids": false, 00:20:48.759 "transport_tos": 0, 00:20:48.759 "nvme_error_stat": false, 00:20:48.759 "rdma_srq_size": 0, 00:20:48.759 "io_path_stat": false, 00:20:48.759 "allow_accel_sequence": false, 00:20:48.759 "rdma_max_cq_size": 0, 00:20:48.759 "rdma_cm_event_timeout_ms": 0, 00:20:48.759 "dhchap_digests": [ 00:20:48.759 "sha256", 00:20:48.759 "sha384", 00:20:48.759 "sha512" 00:20:48.759 ], 00:20:48.759 "dhchap_dhgroups": [ 00:20:48.759 "null", 00:20:48.759 "ffdhe2048", 00:20:48.759 "ffdhe3072", 00:20:48.759 "ffdhe4096", 00:20:48.759 "ffdhe6144", 00:20:48.759 "ffdhe8192" 00:20:48.759 ] 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_nvme_set_hotplug", 00:20:48.759 "params": { 00:20:48.759 "period_us": 100000, 00:20:48.759 "enable": false 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_malloc_create", 00:20:48.759 "params": { 00:20:48.759 "name": "malloc0", 00:20:48.759 "num_blocks": 8192, 00:20:48.759 "block_size": 4096, 00:20:48.759 "physical_block_size": 4096, 00:20:48.759 "uuid": "cb089317-fe3b-43d6-b20d-68e960b7a0af", 00:20:48.759 "optimal_io_boundary": 0, 00:20:48.759 "md_size": 0, 00:20:48.759 "dif_type": 0, 00:20:48.759 "dif_is_head_of_md": false, 00:20:48.759 "dif_pi_format": 0 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "bdev_wait_for_examine" 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "nbd", 00:20:48.759 "config": [] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "scheduler", 00:20:48.759 "config": [ 00:20:48.759 { 00:20:48.759 "method": "framework_set_scheduler", 00:20:48.759 "params": { 00:20:48.759 "name": "static" 00:20:48.759 } 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "subsystem": "nvmf", 00:20:48.759 "config": [ 00:20:48.759 { 00:20:48.759 "method": "nvmf_set_config", 00:20:48.759 "params": { 00:20:48.759 "discovery_filter": "match_any", 00:20:48.759 "admin_cmd_passthru": { 00:20:48.759 "identify_ctrlr": false 00:20:48.759 }, 00:20:48.759 "dhchap_digests": [ 00:20:48.759 "sha256", 00:20:48.759 "sha384", 00:20:48.759 "sha512" 00:20:48.759 ], 00:20:48.759 "dhchap_dhgroups": [ 00:20:48.759 "null", 00:20:48.759 "ffdhe2048", 00:20:48.759 "ffdhe3072", 00:20:48.759 "ffdhe4096", 00:20:48.759 "ffdhe6144", 00:20:48.759 "ffdhe8192" 00:20:48.759 ] 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_set_max_subsystems", 00:20:48.759 "params": { 00:20:48.759 "max_subsystems": 1024 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_set_crdt", 00:20:48.759 "params": { 00:20:48.759 "crdt1": 0, 00:20:48.759 "crdt2": 0, 00:20:48.759 "crdt3": 0 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_create_transport", 00:20:48.759 "params": 
{ 00:20:48.759 "trtype": "TCP", 00:20:48.759 "max_queue_depth": 128, 00:20:48.759 "max_io_qpairs_per_ctrlr": 127, 00:20:48.759 "in_capsule_data_size": 4096, 00:20:48.759 "max_io_size": 131072, 00:20:48.759 "io_unit_size": 131072, 00:20:48.759 "max_aq_depth": 128, 00:20:48.759 "num_shared_buffers": 511, 00:20:48.759 "buf_cache_size": 4294967295, 00:20:48.759 "dif_insert_or_strip": false, 00:20:48.759 "zcopy": false, 00:20:48.759 "c2h_success": false, 00:20:48.759 "sock_priority": 0, 00:20:48.759 "abort_timeout_sec": 1, 00:20:48.759 "ack_timeout": 0, 00:20:48.759 "data_wr_pool_size": 0 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_create_subsystem", 00:20:48.759 "params": { 00:20:48.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.759 "allow_any_host": false, 00:20:48.759 "serial_number": "00000000000000000000", 00:20:48.759 "model_number": "SPDK bdev Controller", 00:20:48.759 "max_namespaces": 32, 00:20:48.759 "min_cntlid": 1, 00:20:48.759 "max_cntlid": 65519, 00:20:48.759 "ana_reporting": false 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_subsystem_add_host", 00:20:48.759 "params": { 00:20:48.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.759 "host": "nqn.2016-06.io.spdk:host1", 00:20:48.759 "psk": "key0" 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_subsystem_add_ns", 00:20:48.759 "params": { 00:20:48.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.759 "namespace": { 00:20:48.759 "nsid": 1, 00:20:48.759 "bdev_name": "malloc0", 00:20:48.759 "nguid": "CB089317FE3B43D6B20D68E960B7A0AF", 00:20:48.759 "uuid": "cb089317-fe3b-43d6-b20d-68e960b7a0af", 00:20:48.759 "no_auto_visible": false 00:20:48.759 } 00:20:48.759 } 00:20:48.759 }, 00:20:48.759 { 00:20:48.759 "method": "nvmf_subsystem_add_listener", 00:20:48.759 "params": { 00:20:48.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.759 "listen_address": { 00:20:48.759 "trtype": "TCP", 00:20:48.759 "adrfam": "IPv4", 00:20:48.759 "traddr": "10.0.0.2", 00:20:48.759 "trsvcid": "4420" 00:20:48.759 }, 00:20:48.759 "secure_channel": false, 00:20:48.759 "sock_impl": "ssl" 00:20:48.759 } 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 } 00:20:48.759 ] 00:20:48.759 }' 00:20:48.760 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:49.022 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:49.022 "subsystems": [ 00:20:49.022 { 00:20:49.022 "subsystem": "keyring", 00:20:49.022 "config": [ 00:20:49.022 { 00:20:49.022 "method": "keyring_file_add_key", 00:20:49.022 "params": { 00:20:49.022 "name": "key0", 00:20:49.022 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:49.022 } 00:20:49.022 } 00:20:49.022 ] 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "subsystem": "iobuf", 00:20:49.022 "config": [ 00:20:49.022 { 00:20:49.022 "method": "iobuf_set_options", 00:20:49.022 "params": { 00:20:49.022 "small_pool_count": 8192, 00:20:49.022 "large_pool_count": 1024, 00:20:49.022 "small_bufsize": 8192, 00:20:49.022 "large_bufsize": 135168, 00:20:49.022 "enable_numa": false 00:20:49.022 } 00:20:49.022 } 00:20:49.022 ] 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "subsystem": "sock", 00:20:49.022 "config": [ 00:20:49.022 { 00:20:49.022 "method": "sock_set_default_impl", 00:20:49.022 "params": { 00:20:49.022 "impl_name": "posix" 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "sock_impl_set_options", 00:20:49.022 
"params": { 00:20:49.022 "impl_name": "ssl", 00:20:49.022 "recv_buf_size": 4096, 00:20:49.022 "send_buf_size": 4096, 00:20:49.022 "enable_recv_pipe": true, 00:20:49.022 "enable_quickack": false, 00:20:49.022 "enable_placement_id": 0, 00:20:49.022 "enable_zerocopy_send_server": true, 00:20:49.022 "enable_zerocopy_send_client": false, 00:20:49.022 "zerocopy_threshold": 0, 00:20:49.022 "tls_version": 0, 00:20:49.022 "enable_ktls": false 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "sock_impl_set_options", 00:20:49.022 "params": { 00:20:49.022 "impl_name": "posix", 00:20:49.022 "recv_buf_size": 2097152, 00:20:49.022 "send_buf_size": 2097152, 00:20:49.022 "enable_recv_pipe": true, 00:20:49.022 "enable_quickack": false, 00:20:49.022 "enable_placement_id": 0, 00:20:49.022 "enable_zerocopy_send_server": true, 00:20:49.022 "enable_zerocopy_send_client": false, 00:20:49.022 "zerocopy_threshold": 0, 00:20:49.022 "tls_version": 0, 00:20:49.022 "enable_ktls": false 00:20:49.022 } 00:20:49.022 } 00:20:49.022 ] 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "subsystem": "vmd", 00:20:49.022 "config": [] 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "subsystem": "accel", 00:20:49.022 "config": [ 00:20:49.022 { 00:20:49.022 "method": "accel_set_options", 00:20:49.022 "params": { 00:20:49.022 "small_cache_size": 128, 00:20:49.022 "large_cache_size": 16, 00:20:49.022 "task_count": 2048, 00:20:49.022 "sequence_count": 2048, 00:20:49.022 "buf_count": 2048 00:20:49.022 } 00:20:49.022 } 00:20:49.022 ] 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "subsystem": "bdev", 00:20:49.022 "config": [ 00:20:49.022 { 00:20:49.022 "method": "bdev_set_options", 00:20:49.022 "params": { 00:20:49.022 "bdev_io_pool_size": 65535, 00:20:49.022 "bdev_io_cache_size": 256, 00:20:49.022 "bdev_auto_examine": true, 00:20:49.022 "iobuf_small_cache_size": 128, 00:20:49.022 "iobuf_large_cache_size": 16 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "bdev_raid_set_options", 00:20:49.022 "params": { 00:20:49.022 "process_window_size_kb": 1024, 00:20:49.022 "process_max_bandwidth_mb_sec": 0 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "bdev_iscsi_set_options", 00:20:49.022 "params": { 00:20:49.022 "timeout_sec": 30 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "bdev_nvme_set_options", 00:20:49.022 "params": { 00:20:49.022 "action_on_timeout": "none", 00:20:49.022 "timeout_us": 0, 00:20:49.022 "timeout_admin_us": 0, 00:20:49.022 "keep_alive_timeout_ms": 10000, 00:20:49.022 "arbitration_burst": 0, 00:20:49.022 "low_priority_weight": 0, 00:20:49.022 "medium_priority_weight": 0, 00:20:49.022 "high_priority_weight": 0, 00:20:49.022 "nvme_adminq_poll_period_us": 10000, 00:20:49.022 "nvme_ioq_poll_period_us": 0, 00:20:49.022 "io_queue_requests": 512, 00:20:49.022 "delay_cmd_submit": true, 00:20:49.022 "transport_retry_count": 4, 00:20:49.022 "bdev_retry_count": 3, 00:20:49.022 "transport_ack_timeout": 0, 00:20:49.022 "ctrlr_loss_timeout_sec": 0, 00:20:49.022 "reconnect_delay_sec": 0, 00:20:49.022 "fast_io_fail_timeout_sec": 0, 00:20:49.022 "disable_auto_failback": false, 00:20:49.022 "generate_uuids": false, 00:20:49.022 "transport_tos": 0, 00:20:49.022 "nvme_error_stat": false, 00:20:49.022 "rdma_srq_size": 0, 00:20:49.022 "io_path_stat": false, 00:20:49.022 "allow_accel_sequence": false, 00:20:49.022 "rdma_max_cq_size": 0, 00:20:49.022 "rdma_cm_event_timeout_ms": 0, 00:20:49.022 "dhchap_digests": [ 00:20:49.022 "sha256", 00:20:49.022 "sha384", 00:20:49.022 
"sha512" 00:20:49.022 ], 00:20:49.022 "dhchap_dhgroups": [ 00:20:49.022 "null", 00:20:49.022 "ffdhe2048", 00:20:49.022 "ffdhe3072", 00:20:49.022 "ffdhe4096", 00:20:49.022 "ffdhe6144", 00:20:49.022 "ffdhe8192" 00:20:49.022 ] 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "bdev_nvme_attach_controller", 00:20:49.022 "params": { 00:20:49.022 "name": "nvme0", 00:20:49.022 "trtype": "TCP", 00:20:49.022 "adrfam": "IPv4", 00:20:49.022 "traddr": "10.0.0.2", 00:20:49.022 "trsvcid": "4420", 00:20:49.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.022 "prchk_reftag": false, 00:20:49.022 "prchk_guard": false, 00:20:49.022 "ctrlr_loss_timeout_sec": 0, 00:20:49.022 "reconnect_delay_sec": 0, 00:20:49.022 "fast_io_fail_timeout_sec": 0, 00:20:49.022 "psk": "key0", 00:20:49.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.022 "hdgst": false, 00:20:49.022 "ddgst": false, 00:20:49.022 "multipath": "multipath" 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.022 "method": "bdev_nvme_set_hotplug", 00:20:49.022 "params": { 00:20:49.022 "period_us": 100000, 00:20:49.022 "enable": false 00:20:49.022 } 00:20:49.022 }, 00:20:49.022 { 00:20:49.023 "method": "bdev_enable_histogram", 00:20:49.023 "params": { 00:20:49.023 "name": "nvme0n1", 00:20:49.023 "enable": true 00:20:49.023 } 00:20:49.023 }, 00:20:49.023 { 00:20:49.023 "method": "bdev_wait_for_examine" 00:20:49.023 } 00:20:49.023 ] 00:20:49.023 }, 00:20:49.023 { 00:20:49.023 "subsystem": "nbd", 00:20:49.023 "config": [] 00:20:49.023 } 00:20:49.023 ] 00:20:49.023 }' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1309405 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1309405 ']' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1309405 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309405 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309405' 00:20:49.023 killing process with pid 1309405 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1309405 00:20:49.023 Received shutdown signal, test time was about 1.000000 seconds 00:20:49.023 00:20:49.023 Latency(us) 00:20:49.023 [2024-11-20T06:21:23.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.023 [2024-11-20T06:21:23.790Z] =================================================================================================================== 00:20:49.023 [2024-11-20T06:21:23.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1309405 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1309202 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1309202 
']' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1309202 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.023 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309202 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309202' 00:20:49.284 killing process with pid 1309202 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1309202 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1309202 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.284 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:49.284 "subsystems": [ 00:20:49.284 { 00:20:49.284 "subsystem": "keyring", 00:20:49.284 "config": [ 00:20:49.284 { 00:20:49.284 "method": "keyring_file_add_key", 00:20:49.284 "params": { 00:20:49.284 "name": "key0", 00:20:49.284 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:49.284 } 00:20:49.284 } 00:20:49.284 ] 00:20:49.284 }, 00:20:49.284 { 00:20:49.284 "subsystem": "iobuf", 00:20:49.284 "config": [ 00:20:49.284 { 00:20:49.284 "method": "iobuf_set_options", 00:20:49.284 "params": { 00:20:49.284 "small_pool_count": 8192, 00:20:49.284 "large_pool_count": 1024, 00:20:49.284 "small_bufsize": 8192, 00:20:49.284 "large_bufsize": 135168, 00:20:49.284 "enable_numa": false 00:20:49.284 } 00:20:49.284 } 00:20:49.284 ] 00:20:49.284 }, 00:20:49.284 { 00:20:49.285 "subsystem": "sock", 00:20:49.285 "config": [ 00:20:49.285 { 00:20:49.285 "method": "sock_set_default_impl", 00:20:49.285 "params": { 00:20:49.285 "impl_name": "posix" 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "sock_impl_set_options", 00:20:49.285 "params": { 00:20:49.285 "impl_name": "ssl", 00:20:49.285 "recv_buf_size": 4096, 00:20:49.285 "send_buf_size": 4096, 00:20:49.285 "enable_recv_pipe": true, 00:20:49.285 "enable_quickack": false, 00:20:49.285 "enable_placement_id": 0, 00:20:49.285 "enable_zerocopy_send_server": true, 00:20:49.285 "enable_zerocopy_send_client": false, 00:20:49.285 "zerocopy_threshold": 0, 00:20:49.285 "tls_version": 0, 00:20:49.285 "enable_ktls": false 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "sock_impl_set_options", 00:20:49.285 "params": { 00:20:49.285 "impl_name": "posix", 00:20:49.285 "recv_buf_size": 2097152, 00:20:49.285 "send_buf_size": 2097152, 00:20:49.285 "enable_recv_pipe": true, 00:20:49.285 "enable_quickack": false, 00:20:49.285 "enable_placement_id": 0, 00:20:49.285 "enable_zerocopy_send_server": true, 00:20:49.285 "enable_zerocopy_send_client": false, 00:20:49.285 "zerocopy_threshold": 0, 00:20:49.285 "tls_version": 0, 00:20:49.285 "enable_ktls": 
false 00:20:49.285 } 00:20:49.285 } 00:20:49.285 ] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "vmd", 00:20:49.285 "config": [] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "accel", 00:20:49.285 "config": [ 00:20:49.285 { 00:20:49.285 "method": "accel_set_options", 00:20:49.285 "params": { 00:20:49.285 "small_cache_size": 128, 00:20:49.285 "large_cache_size": 16, 00:20:49.285 "task_count": 2048, 00:20:49.285 "sequence_count": 2048, 00:20:49.285 "buf_count": 2048 00:20:49.285 } 00:20:49.285 } 00:20:49.285 ] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "bdev", 00:20:49.285 "config": [ 00:20:49.285 { 00:20:49.285 "method": "bdev_set_options", 00:20:49.285 "params": { 00:20:49.285 "bdev_io_pool_size": 65535, 00:20:49.285 "bdev_io_cache_size": 256, 00:20:49.285 "bdev_auto_examine": true, 00:20:49.285 "iobuf_small_cache_size": 128, 00:20:49.285 "iobuf_large_cache_size": 16 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_raid_set_options", 00:20:49.285 "params": { 00:20:49.285 "process_window_size_kb": 1024, 00:20:49.285 "process_max_bandwidth_mb_sec": 0 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_iscsi_set_options", 00:20:49.285 "params": { 00:20:49.285 "timeout_sec": 30 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_nvme_set_options", 00:20:49.285 "params": { 00:20:49.285 "action_on_timeout": "none", 00:20:49.285 "timeout_us": 0, 00:20:49.285 "timeout_admin_us": 0, 00:20:49.285 "keep_alive_timeout_ms": 10000, 00:20:49.285 "arbitration_burst": 0, 00:20:49.285 "low_priority_weight": 0, 00:20:49.285 "medium_priority_weight": 0, 00:20:49.285 "high_priority_weight": 0, 00:20:49.285 "nvme_adminq_poll_period_us": 10000, 00:20:49.285 "nvme_ioq_poll_period_us": 0, 00:20:49.285 "io_queue_requests": 0, 00:20:49.285 "delay_cmd_submit": true, 00:20:49.285 "transport_retry_count": 4, 00:20:49.285 "bdev_retry_count": 3, 00:20:49.285 "transport_ack_timeout": 0, 00:20:49.285 "ctrlr_loss_timeout_sec": 0, 00:20:49.285 "reconnect_delay_sec": 0, 00:20:49.285 "fast_io_fail_timeout_sec": 0, 00:20:49.285 "disable_auto_failback": false, 00:20:49.285 "generate_uuids": false, 00:20:49.285 "transport_tos": 0, 00:20:49.285 "nvme_error_stat": false, 00:20:49.285 "rdma_srq_size": 0, 00:20:49.285 "io_path_stat": false, 00:20:49.285 "allow_accel_sequence": false, 00:20:49.285 "rdma_max_cq_size": 0, 00:20:49.285 "rdma_cm_event_timeout_ms": 0, 00:20:49.285 "dhchap_digests": [ 00:20:49.285 "sha256", 00:20:49.285 "sha384", 00:20:49.285 "sha512" 00:20:49.285 ], 00:20:49.285 "dhchap_dhgroups": [ 00:20:49.285 "null", 00:20:49.285 "ffdhe2048", 00:20:49.285 "ffdhe3072", 00:20:49.285 "ffdhe4096", 00:20:49.285 "ffdhe6144", 00:20:49.285 "ffdhe8192" 00:20:49.285 ] 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_nvme_set_hotplug", 00:20:49.285 "params": { 00:20:49.285 "period_us": 100000, 00:20:49.285 "enable": false 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_malloc_create", 00:20:49.285 "params": { 00:20:49.285 "name": "malloc0", 00:20:49.285 "num_blocks": 8192, 00:20:49.285 "block_size": 4096, 00:20:49.285 "physical_block_size": 4096, 00:20:49.285 "uuid": "cb089317-fe3b-43d6-b20d-68e960b7a0af", 00:20:49.285 "optimal_io_boundary": 0, 00:20:49.285 "md_size": 0, 00:20:49.285 "dif_type": 0, 00:20:49.285 "dif_is_head_of_md": false, 00:20:49.285 "dif_pi_format": 0 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "bdev_wait_for_examine" 
00:20:49.285 } 00:20:49.285 ] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "nbd", 00:20:49.285 "config": [] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "scheduler", 00:20:49.285 "config": [ 00:20:49.285 { 00:20:49.285 "method": "framework_set_scheduler", 00:20:49.285 "params": { 00:20:49.285 "name": "static" 00:20:49.285 } 00:20:49.285 } 00:20:49.285 ] 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "subsystem": "nvmf", 00:20:49.285 "config": [ 00:20:49.285 { 00:20:49.285 "method": "nvmf_set_config", 00:20:49.285 "params": { 00:20:49.285 "discovery_filter": "match_any", 00:20:49.285 "admin_cmd_passthru": { 00:20:49.285 "identify_ctrlr": false 00:20:49.285 }, 00:20:49.285 "dhchap_digests": [ 00:20:49.285 "sha256", 00:20:49.285 "sha384", 00:20:49.285 "sha512" 00:20:49.285 ], 00:20:49.285 "dhchap_dhgroups": [ 00:20:49.285 "null", 00:20:49.285 "ffdhe2048", 00:20:49.285 "ffdhe3072", 00:20:49.285 "ffdhe4096", 00:20:49.285 "ffdhe6144", 00:20:49.285 "ffdhe8192" 00:20:49.285 ] 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_set_max_subsystems", 00:20:49.285 "params": { 00:20:49.285 "max_subsystems": 1024 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_set_crdt", 00:20:49.285 "params": { 00:20:49.285 "crdt1": 0, 00:20:49.285 "crdt2": 0, 00:20:49.285 "crdt3": 0 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_create_transport", 00:20:49.285 "params": { 00:20:49.285 "trtype": "TCP", 00:20:49.285 "max_queue_depth": 128, 00:20:49.285 "max_io_qpairs_per_ctrlr": 127, 00:20:49.285 "in_capsule_data_size": 4096, 00:20:49.285 "max_io_size": 131072, 00:20:49.285 "io_unit_size": 131072, 00:20:49.285 "max_aq_depth": 128, 00:20:49.285 "num_shared_buffers": 511, 00:20:49.285 "buf_cache_size": 4294967295, 00:20:49.285 "dif_insert_or_strip": false, 00:20:49.285 "zcopy": false, 00:20:49.285 "c2h_success": false, 00:20:49.285 "sock_priority": 0, 00:20:49.285 "abort_timeout_sec": 1, 00:20:49.285 "ack_timeout": 0, 00:20:49.285 "data_wr_pool_size": 0 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_create_subsystem", 00:20:49.285 "params": { 00:20:49.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.285 "allow_any_host": false, 00:20:49.285 "serial_number": "00000000000000000000", 00:20:49.285 "model_number": "SPDK bdev Controller", 00:20:49.285 "max_namespaces": 32, 00:20:49.285 "min_cntlid": 1, 00:20:49.285 "max_cntlid": 65519, 00:20:49.285 "ana_reporting": false 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_subsystem_add_host", 00:20:49.285 "params": { 00:20:49.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.285 "host": "nqn.2016-06.io.spdk:host1", 00:20:49.285 "psk": "key0" 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_subsystem_add_ns", 00:20:49.285 "params": { 00:20:49.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.285 "namespace": { 00:20:49.285 "nsid": 1, 00:20:49.285 "bdev_name": "malloc0", 00:20:49.285 "nguid": "CB089317FE3B43D6B20D68E960B7A0AF", 00:20:49.285 "uuid": "cb089317-fe3b-43d6-b20d-68e960b7a0af", 00:20:49.285 "no_auto_visible": false 00:20:49.285 } 00:20:49.285 } 00:20:49.285 }, 00:20:49.285 { 00:20:49.285 "method": "nvmf_subsystem_add_listener", 00:20:49.285 "params": { 00:20:49.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.285 "listen_address": { 00:20:49.285 "trtype": "TCP", 00:20:49.285 "adrfam": "IPv4", 00:20:49.285 "traddr": "10.0.0.2", 00:20:49.286 "trsvcid": "4420" 00:20:49.286 }, 00:20:49.286 
"secure_channel": false, 00:20:49.286 "sock_impl": "ssl" 00:20:49.286 } 00:20:49.286 } 00:20:49.286 ] 00:20:49.286 } 00:20:49.286 ] 00:20:49.286 }' 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1309940 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1309940 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1309940 ']' 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.286 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.286 [2024-11-20 07:21:23.977290] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:20:49.286 [2024-11-20 07:21:23.977350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.547 [2024-11-20 07:21:24.062717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.548 [2024-11-20 07:21:24.097080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.548 [2024-11-20 07:21:24.097111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.548 [2024-11-20 07:21:24.097119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.548 [2024-11-20 07:21:24.097126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.548 [2024-11-20 07:21:24.097132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.548 [2024-11-20 07:21:24.097744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.548 [2024-11-20 07:21:24.297897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.809 [2024-11-20 07:21:24.329914] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.809 [2024-11-20 07:21:24.330139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1310261 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1310261 /var/tmp/bdevperf.sock 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1310261 ']' 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
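[Editor's note] From this point the test restarts both applications from the JSON captured above instead of re-issuing RPCs: the -c /dev/fd/62 and -c /dev/fd/63 arguments visible in the command lines are the read ends of bash process substitutions feeding the echoed configs. A minimal sketch of that pattern, assuming $tgtcfg and $bperfcfg hold the two save_config dumps (full jenkins workspace paths shortened here):

  # Replay captured configuration at startup; <(...) expands to /dev/fd/NN.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$bperfcfg") &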
00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.070 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:50.070 "subsystems": [ 00:20:50.070 { 00:20:50.070 "subsystem": "keyring", 00:20:50.070 "config": [ 00:20:50.070 { 00:20:50.070 "method": "keyring_file_add_key", 00:20:50.070 "params": { 00:20:50.070 "name": "key0", 00:20:50.070 "path": "/tmp/tmp.5ABiPJtRFp" 00:20:50.070 } 00:20:50.070 } 00:20:50.070 ] 00:20:50.070 }, 00:20:50.070 { 00:20:50.070 "subsystem": "iobuf", 00:20:50.070 "config": [ 00:20:50.070 { 00:20:50.070 "method": "iobuf_set_options", 00:20:50.070 "params": { 00:20:50.070 "small_pool_count": 8192, 00:20:50.070 "large_pool_count": 1024, 00:20:50.070 "small_bufsize": 8192, 00:20:50.070 "large_bufsize": 135168, 00:20:50.070 "enable_numa": false 00:20:50.070 } 00:20:50.070 } 00:20:50.070 ] 00:20:50.070 }, 00:20:50.070 { 00:20:50.070 "subsystem": "sock", 00:20:50.070 "config": [ 00:20:50.070 { 00:20:50.070 "method": "sock_set_default_impl", 00:20:50.070 "params": { 00:20:50.070 "impl_name": "posix" 00:20:50.070 } 00:20:50.070 }, 00:20:50.070 { 00:20:50.070 "method": "sock_impl_set_options", 00:20:50.070 "params": { 00:20:50.070 "impl_name": "ssl", 00:20:50.070 "recv_buf_size": 4096, 00:20:50.070 "send_buf_size": 4096, 00:20:50.070 "enable_recv_pipe": true, 00:20:50.070 "enable_quickack": false, 00:20:50.071 "enable_placement_id": 0, 00:20:50.071 "enable_zerocopy_send_server": true, 00:20:50.071 "enable_zerocopy_send_client": false, 00:20:50.071 "zerocopy_threshold": 0, 00:20:50.071 "tls_version": 0, 00:20:50.071 "enable_ktls": false 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "sock_impl_set_options", 00:20:50.071 "params": { 00:20:50.071 "impl_name": "posix", 00:20:50.071 "recv_buf_size": 2097152, 00:20:50.071 "send_buf_size": 2097152, 00:20:50.071 "enable_recv_pipe": true, 00:20:50.071 "enable_quickack": false, 00:20:50.071 "enable_placement_id": 0, 00:20:50.071 "enable_zerocopy_send_server": true, 00:20:50.071 "enable_zerocopy_send_client": false, 00:20:50.071 "zerocopy_threshold": 0, 00:20:50.071 "tls_version": 0, 00:20:50.071 "enable_ktls": false 00:20:50.071 } 00:20:50.071 } 00:20:50.071 ] 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "subsystem": "vmd", 00:20:50.071 "config": [] 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "subsystem": "accel", 00:20:50.071 "config": [ 00:20:50.071 { 00:20:50.071 "method": "accel_set_options", 00:20:50.071 "params": { 00:20:50.071 "small_cache_size": 128, 00:20:50.071 "large_cache_size": 16, 00:20:50.071 "task_count": 2048, 00:20:50.071 "sequence_count": 2048, 00:20:50.071 "buf_count": 2048 00:20:50.071 } 00:20:50.071 } 00:20:50.071 ] 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "subsystem": "bdev", 00:20:50.071 "config": [ 00:20:50.071 { 00:20:50.071 "method": "bdev_set_options", 00:20:50.071 "params": { 00:20:50.071 "bdev_io_pool_size": 65535, 00:20:50.071 "bdev_io_cache_size": 256, 00:20:50.071 "bdev_auto_examine": true, 00:20:50.071 "iobuf_small_cache_size": 128, 00:20:50.071 "iobuf_large_cache_size": 16 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": 
"bdev_raid_set_options", 00:20:50.071 "params": { 00:20:50.071 "process_window_size_kb": 1024, 00:20:50.071 "process_max_bandwidth_mb_sec": 0 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_iscsi_set_options", 00:20:50.071 "params": { 00:20:50.071 "timeout_sec": 30 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_nvme_set_options", 00:20:50.071 "params": { 00:20:50.071 "action_on_timeout": "none", 00:20:50.071 "timeout_us": 0, 00:20:50.071 "timeout_admin_us": 0, 00:20:50.071 "keep_alive_timeout_ms": 10000, 00:20:50.071 "arbitration_burst": 0, 00:20:50.071 "low_priority_weight": 0, 00:20:50.071 "medium_priority_weight": 0, 00:20:50.071 "high_priority_weight": 0, 00:20:50.071 "nvme_adminq_poll_period_us": 10000, 00:20:50.071 "nvme_ioq_poll_period_us": 0, 00:20:50.071 "io_queue_requests": 512, 00:20:50.071 "delay_cmd_submit": true, 00:20:50.071 "transport_retry_count": 4, 00:20:50.071 "bdev_retry_count": 3, 00:20:50.071 "transport_ack_timeout": 0, 00:20:50.071 "ctrlr_loss_timeout_sec": 0, 00:20:50.071 "reconnect_delay_sec": 0, 00:20:50.071 "fast_io_fail_timeout_sec": 0, 00:20:50.071 "disable_auto_failback": false, 00:20:50.071 "generate_uuids": false, 00:20:50.071 "transport_tos": 0, 00:20:50.071 "nvme_error_stat": false, 00:20:50.071 "rdma_srq_size": 0, 00:20:50.071 "io_path_stat": false, 00:20:50.071 "allow_accel_sequence": false, 00:20:50.071 "rdma_max_cq_size": 0, 00:20:50.071 "rdma_cm_event_timeout_ms": 0, 00:20:50.071 "dhchap_digests": [ 00:20:50.071 "sha256", 00:20:50.071 "sha384", 00:20:50.071 "sha512" 00:20:50.071 ], 00:20:50.071 "dhchap_dhgroups": [ 00:20:50.071 "null", 00:20:50.071 "ffdhe2048", 00:20:50.071 "ffdhe3072", 00:20:50.071 "ffdhe4096", 00:20:50.071 "ffdhe6144", 00:20:50.071 "ffdhe8192" 00:20:50.071 ] 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_nvme_attach_controller", 00:20:50.071 "params": { 00:20:50.071 "name": "nvme0", 00:20:50.071 "trtype": "TCP", 00:20:50.071 "adrfam": "IPv4", 00:20:50.071 "traddr": "10.0.0.2", 00:20:50.071 "trsvcid": "4420", 00:20:50.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.071 "prchk_reftag": false, 00:20:50.071 "prchk_guard": false, 00:20:50.071 "ctrlr_loss_timeout_sec": 0, 00:20:50.071 "reconnect_delay_sec": 0, 00:20:50.071 "fast_io_fail_timeout_sec": 0, 00:20:50.071 "psk": "key0", 00:20:50.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.071 "hdgst": false, 00:20:50.071 "ddgst": false, 00:20:50.071 "multipath": "multipath" 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_nvme_set_hotplug", 00:20:50.071 "params": { 00:20:50.071 "period_us": 100000, 00:20:50.071 "enable": false 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_enable_histogram", 00:20:50.071 "params": { 00:20:50.071 "name": "nvme0n1", 00:20:50.071 "enable": true 00:20:50.071 } 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "method": "bdev_wait_for_examine" 00:20:50.071 } 00:20:50.071 ] 00:20:50.071 }, 00:20:50.071 { 00:20:50.071 "subsystem": "nbd", 00:20:50.071 "config": [] 00:20:50.071 } 00:20:50.071 ] 00:20:50.071 }' 00:20:50.332 [2024-11-20 07:21:24.862810] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:20:50.332 [2024-11-20 07:21:24.862860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310261 ] 00:20:50.333 [2024-11-20 07:21:24.952086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.333 [2024-11-20 07:21:24.981850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.593 [2024-11-20 07:21:25.117903] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.164 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.164 Running I/O for 1 seconds... 00:20:52.546 5371.00 IOPS, 20.98 MiB/s 00:20:52.546 Latency(us) 00:20:52.546 [2024-11-20T06:21:27.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.546 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.546 Verification LBA range: start 0x0 length 0x2000 00:20:52.546 nvme0n1 : 1.03 5345.84 20.88 0.00 0.00 23683.99 6280.53 58982.40 00:20:52.546 [2024-11-20T06:21:27.313Z] =================================================================================================================== 00:20:52.546 [2024-11-20T06:21:27.313Z] Total : 5345.84 20.88 0.00 0.00 23683.99 6280.53 58982.40 00:20:52.546 { 00:20:52.546 "results": [ 00:20:52.546 { 00:20:52.546 "job": "nvme0n1", 00:20:52.546 "core_mask": "0x2", 00:20:52.546 "workload": "verify", 00:20:52.546 "status": "finished", 00:20:52.546 "verify_range": { 00:20:52.546 "start": 0, 00:20:52.546 "length": 8192 00:20:52.546 }, 00:20:52.546 "queue_depth": 128, 00:20:52.546 "io_size": 4096, 00:20:52.546 "runtime": 1.02865, 00:20:52.546 "iops": 5345.841637097166, 00:20:52.546 "mibps": 20.882193894910806, 00:20:52.546 "io_failed": 0, 00:20:52.546 "io_timeout": 0, 00:20:52.546 "avg_latency_us": 23683.989670849245, 00:20:52.546 "min_latency_us": 6280.533333333334, 00:20:52.546 "max_latency_us": 58982.4 00:20:52.546 } 00:20:52.546 ], 00:20:52.546 "core_count": 1 00:20:52.546 } 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
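[Editor's note] Before the final benchmark run recorded above, the script double-checks that the controller created during config load actually exists, then drives I/O over the RPC socket. A condensed sketch, reusing the $rpc shorthand introduced earlier:

  # Confirm the pre-configured controller is present, then run the benchmark:
  name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # Sanity check of the summary line: 5345.84 IOPS * 4096 B / 1048576 = 20.88 MiB/s,
  # matching the reported "mibps"; IOPS is completed I/Os over the 1.02865 s runtime.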
00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:52.546 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:52.547 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:52.547 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:52.547 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:52.547 nvmf_trace.0 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1310261 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1310261 ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1310261 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1310261 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1310261' 00:20:52.547 killing process with pid 1310261 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1310261 00:20:52.547 Received shutdown signal, test time was about 1.000000 seconds 00:20:52.547 00:20:52.547 Latency(us) 00:20:52.547 [2024-11-20T06:21:27.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.547 [2024-11-20T06:21:27.314Z] =================================================================================================================== 00:20:52.547 [2024-11-20T06:21:27.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1310261 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.547 rmmod nvme_tcp 00:20:52.547 rmmod nvme_fabrics 00:20:52.547 rmmod nvme_keyring 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.547 07:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1309940 ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1309940 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1309940 ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1309940 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:52.547 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309940 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309940' 00:20:52.807 killing process with pid 1309940 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1309940 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1309940 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.807 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.351 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.351 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WEHR1tfiTz /tmp/tmp.KOGKGNQnkf /tmp/tmp.5ABiPJtRFp 00:20:55.351 00:20:55.351 real 1m24.110s 00:20:55.351 user 2m9.685s 00:20:55.352 sys 0m27.129s 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.352 ************************************ 00:20:55.352 END TEST nvmf_tls 
00:20:55.352 ************************************ 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.352 ************************************ 00:20:55.352 START TEST nvmf_fips 00:20:55.352 ************************************ 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:55.352 * Looking for test storage... 00:20:55.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:55.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.352 --rc genhtml_branch_coverage=1 00:20:55.352 --rc genhtml_function_coverage=1 00:20:55.352 --rc genhtml_legend=1 00:20:55.352 --rc geninfo_all_blocks=1 00:20:55.352 --rc geninfo_unexecuted_blocks=1 00:20:55.352 00:20:55.352 ' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:55.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.352 --rc genhtml_branch_coverage=1 00:20:55.352 --rc genhtml_function_coverage=1 00:20:55.352 --rc genhtml_legend=1 00:20:55.352 --rc geninfo_all_blocks=1 00:20:55.352 --rc geninfo_unexecuted_blocks=1 00:20:55.352 00:20:55.352 ' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:55.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.352 --rc genhtml_branch_coverage=1 00:20:55.352 --rc genhtml_function_coverage=1 00:20:55.352 --rc genhtml_legend=1 00:20:55.352 --rc geninfo_all_blocks=1 00:20:55.352 --rc geninfo_unexecuted_blocks=1 00:20:55.352 00:20:55.352 ' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:55.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.352 --rc genhtml_branch_coverage=1 00:20:55.352 --rc genhtml_function_coverage=1 00:20:55.352 --rc genhtml_legend=1 00:20:55.352 --rc geninfo_all_blocks=1 00:20:55.352 --rc geninfo_unexecuted_blocks=1 00:20:55.352 00:20:55.352 ' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:55.352 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:55.353 07:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:55.353 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:55.353 Error setting digest 00:20:55.353 40520989A07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:55.353 40520989A07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.353 
07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.353 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:03.507 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.508 07:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:03.508 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.508 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:03.508 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.508 07:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:03.508 Found net devices under 0000:31:00.0: cvl_0_0 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:03.508 Found net devices under 0000:31:00.1: cvl_0_1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.508 07:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.508 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:21:03.770 00:21:03.770 --- 10.0.0.2 ping statistics --- 00:21:03.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.770 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:21:03.770 00:21:03.770 --- 10.0.0.1 ping statistics --- 00:21:03.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.770 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1315463 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1315463 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1315463 ']' 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.770 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:03.770 [2024-11-20 07:21:38.454354] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
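The xtrace above records the FIPS suite's network bring-up before the target starts. Condensed to the commands actually traced (interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the binary path are all as captured in this log), it is roughly:

    # Move the target-side port into its own namespace; address both ends of the link
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept NVMe/TCP (dport 4420) arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the nvmf target inside the namespace (shm id 0, core mask 0x2)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2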
00:21:03.770 [2024-11-20 07:21:38.454426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.031 [2024-11-20 07:21:38.564148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.031 [2024-11-20 07:21:38.614519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.031 [2024-11-20 07:21:38.614572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.031 [2024-11-20 07:21:38.614581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.031 [2024-11-20 07:21:38.614588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.031 [2024-11-20 07:21:38.614594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.031 [2024-11-20 07:21:38.615382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.xTu 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.xTu 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.xTu 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.xTu 00:21:04.603 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.864 [2024-11-20 07:21:39.450056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.864 [2024-11-20 07:21:39.466048] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.864 [2024-11-20 07:21:39.466275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.864 malloc0 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.864 07:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1315680 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1315680 /var/tmp/bdevperf.sock 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1315680 ']' 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:04.864 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:04.864 [2024-11-20 07:21:39.582988] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:21:04.864 [2024-11-20 07:21:39.583055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315680 ] 00:21:05.123 [2024-11-20 07:21:39.651779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.123 [2024-11-20 07:21:39.686292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.693 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.693 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:05.693 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.xTu 00:21:05.953 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:05.953 [2024-11-20 07:21:40.661329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.213 TLSTESTn1 00:21:06.213 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.213 Running I/O for 10 seconds... 
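The TLS attach traced just above reduces to three invocations, reproduced verbatim from the trace (the RPC socket, PSK file, and NQNs are the ones this run generated; bdevperf itself was launched with -q 128 -o 4096 -w verify -t 10):

    # Register the interleaved PSK file with bdevperf's keyring as key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/spdk-psk.xTu
    # Attach to the in-namespace target over NVMe/TCP with TLS, using that key
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the queued 10-second verify workload whose results follow
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests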
00:21:08.100 5777.00 IOPS, 22.57 MiB/s [2024-11-20T06:21:44.252Z] 6073.50 IOPS, 23.72 MiB/s [2024-11-20T06:21:45.194Z] 6095.67 IOPS, 23.81 MiB/s [2024-11-20T06:21:46.137Z] 6156.50 IOPS, 24.05 MiB/s [2024-11-20T06:21:47.078Z] 6090.80 IOPS, 23.79 MiB/s [2024-11-20T06:21:48.018Z] 6128.67 IOPS, 23.94 MiB/s [2024-11-20T06:21:48.960Z] 6086.43 IOPS, 23.78 MiB/s [2024-11-20T06:21:49.901Z] 6109.88 IOPS, 23.87 MiB/s [2024-11-20T06:21:51.287Z] 6044.78 IOPS, 23.61 MiB/s [2024-11-20T06:21:51.287Z] 6042.40 IOPS, 23.60 MiB/s 00:21:16.520 Latency(us) 00:21:16.520 [2024-11-20T06:21:51.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.520 Verification LBA range: start 0x0 length 0x2000 00:21:16.520 TLSTESTn1 : 10.02 6045.87 23.62 0.00 0.00 21135.00 6144.00 23702.19 00:21:16.520 [2024-11-20T06:21:51.287Z] =================================================================================================================== 00:21:16.520 [2024-11-20T06:21:51.287Z] Total : 6045.87 23.62 0.00 0.00 21135.00 6144.00 23702.19 00:21:16.520 { 00:21:16.520 "results": [ 00:21:16.520 { 00:21:16.520 "job": "TLSTESTn1", 00:21:16.520 "core_mask": "0x4", 00:21:16.520 "workload": "verify", 00:21:16.520 "status": "finished", 00:21:16.520 "verify_range": { 00:21:16.520 "start": 0, 00:21:16.520 "length": 8192 00:21:16.520 }, 00:21:16.520 "queue_depth": 128, 00:21:16.520 "io_size": 4096, 00:21:16.520 "runtime": 10.015434, 00:21:16.520 "iops": 6045.868806084689, 00:21:16.520 "mibps": 23.616675023768316, 00:21:16.520 "io_failed": 0, 00:21:16.520 "io_timeout": 0, 00:21:16.520 "avg_latency_us": 21135.00216144801, 00:21:16.520 "min_latency_us": 6144.0, 00:21:16.520 "max_latency_us": 23702.18666666667 00:21:16.520 } 00:21:16.520 ], 00:21:16.520 "core_count": 1 00:21:16.520 } 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:16.520 07:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:16.520 nvmf_trace.0 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1315680 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1315680 ']' 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # kill -0 1315680 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1315680 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1315680' 00:21:16.520 killing process with pid 1315680 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1315680 00:21:16.520 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.520 00:21:16.520 Latency(us) 00:21:16.520 [2024-11-20T06:21:51.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.520 [2024-11-20T06:21:51.287Z] =================================================================================================================== 00:21:16.520 [2024-11-20T06:21:51.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1315680 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.520 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.520 rmmod nvme_tcp 00:21:16.521 rmmod nvme_fabrics 00:21:16.521 rmmod nvme_keyring 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1315463 ']' 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1315463 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1315463 ']' 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1315463 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.521 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1315463 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1315463' 00:21:16.782 killing process with pid 1315463 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1315463 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1315463 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.782 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.xTu 00:21:19.329 00:21:19.329 real 0m23.887s 00:21:19.329 user 0m23.328s 00:21:19.329 sys 0m10.318s 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 ************************************ 00:21:19.329 END TEST nvmf_fips 00:21:19.329 ************************************ 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 ************************************ 00:21:19.329 START TEST nvmf_control_msg_list 00:21:19.329 ************************************ 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:19.329 * Looking for test storage... 
00:21:19.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.329 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:19.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.329 --rc genhtml_branch_coverage=1 00:21:19.329 --rc genhtml_function_coverage=1 00:21:19.329 --rc genhtml_legend=1 00:21:19.329 --rc geninfo_all_blocks=1 00:21:19.330 --rc geninfo_unexecuted_blocks=1 00:21:19.330 00:21:19.330 ' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.330 --rc genhtml_branch_coverage=1 00:21:19.330 --rc genhtml_function_coverage=1 00:21:19.330 --rc genhtml_legend=1 00:21:19.330 --rc geninfo_all_blocks=1 00:21:19.330 --rc geninfo_unexecuted_blocks=1 00:21:19.330 00:21:19.330 ' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.330 --rc genhtml_branch_coverage=1 00:21:19.330 --rc genhtml_function_coverage=1 00:21:19.330 --rc genhtml_legend=1 00:21:19.330 --rc geninfo_all_blocks=1 00:21:19.330 --rc geninfo_unexecuted_blocks=1 00:21:19.330 00:21:19.330 ' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:19.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.330 --rc genhtml_branch_coverage=1 00:21:19.330 --rc genhtml_function_coverage=1 00:21:19.330 --rc genhtml_legend=1 00:21:19.330 --rc geninfo_all_blocks=1 00:21:19.330 --rc geninfo_unexecuted_blocks=1 00:21:19.330 00:21:19.330 ' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.330 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:27.472 07:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:27.472 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.472 07:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:27.472 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:27.472 Found net devices under 0000:31:00.0: cvl_0_0 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:27.472 Found net devices under 0000:31:00.1: cvl_0_1 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.472 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.473 07:22:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.473 07:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:21:27.473 00:21:27.473 --- 10.0.0.2 ping statistics --- 00:21:27.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.473 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:21:27.473 00:21:27.473 --- 10.0.0.1 ping statistics --- 00:21:27.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.473 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.473 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1322709 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1322709 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1322709 ']' 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:27.734 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.734 [2024-11-20 07:22:02.326367] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:21:27.734 [2024-11-20 07:22:02.326418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.734 [2024-11-20 07:22:02.414517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.734 [2024-11-20 07:22:02.448913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.734 [2024-11-20 07:22:02.448949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.734 [2024-11-20 07:22:02.448959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.734 [2024-11-20 07:22:02.448967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.734 [2024-11-20 07:22:02.448974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
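Note: the target above was launched with every tracepoint group enabled (-e 0xFFFF) and shared-memory id 0 (-i 0), so its trace buffer stays live for the whole run. A minimal sketch of capturing that trace, using only the two commands the startup notice itself suggests (the output filenames are illustrative, not from the log):
    # Snapshot the live trace from shm id 0 while nvmf_tgt is still running
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or keep the raw shared-memory file for offline decoding later
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0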
00:21:27.734 [2024-11-20 07:22:02.449577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.995 [2024-11-20 07:22:02.581476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.995 Malloc0 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.995 07:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.995 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:27.996 [2024-11-20 07:22:02.632357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1322729 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1322730 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1322731 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1322729 00:21:27.996 07:22:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.996 [2024-11-20 07:22:02.702781] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:27.996 [2024-11-20 07:22:02.732932] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:27.996 [2024-11-20 07:22:02.733208] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:29.384 Initializing NVMe Controllers 00:21:29.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:29.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:29.384 Initialization complete. Launching workers. 
00:21:29.384 ======================================================== 00:21:29.384 Latency(us) 00:21:29.384 Device Information : IOPS MiB/s Average min max 00:21:29.385 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41141.33 40847.02 41950.84 00:21:29.385 ======================================================== 00:21:29.385 Total : 25.00 0.10 41141.33 40847.02 41950.84 00:21:29.385 00:21:29.385 Initializing NVMe Controllers 00:21:29.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:29.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:29.385 Initialization complete. Launching workers. 00:21:29.385 ======================================================== 00:21:29.385 Latency(us) 00:21:29.385 Device Information : IOPS MiB/s Average min max 00:21:29.385 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1543.00 6.03 647.96 298.68 892.67 00:21:29.385 ======================================================== 00:21:29.385 Total : 1543.00 6.03 647.96 298.68 892.67 00:21:29.385 00:21:29.385 Initializing NVMe Controllers 00:21:29.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:29.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:29.385 Initialization complete. Launching workers. 00:21:29.385 ======================================================== 00:21:29.385 Latency(us) 00:21:29.385 Device Information : IOPS MiB/s Average min max 00:21:29.385 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2119.00 8.28 471.77 166.83 701.83 00:21:29.385 ======================================================== 00:21:29.385 Total : 2119.00 8.28 471.77 166.83 701.83 00:21:29.385 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1322730 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1322731 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.385 rmmod nvme_tcp 00:21:29.385 rmmod nvme_fabrics 00:21:29.385 rmmod nvme_keyring 00:21:29.385 07:22:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 1322709 ']' 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1322709 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1322709 ']' 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1322709 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1322709 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1322709' 00:21:29.385 killing process with pid 1322709 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1322709 00:21:29.385 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1322709 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.684 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.668 00:21:31.668 real 0m12.685s 00:21:31.668 user 0m7.614s 00:21:31.668 sys 0m7.077s 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:31.668 ************************************ 00:21:31.668 END TEST nvmf_control_msg_list 00:21:31.668 ************************************ 
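Note: the control_msg_list test that just finished stresses the TCP transport's control-message pool by creating the transport with a single control message (--control-msg-num 1, in-capsule data capped at 768 bytes) and then attaching three single-queue-depth perf initiators at once, so they contend for that one message. A minimal sketch of the same flow, assuming a running nvmf_tgt (started under ip netns exec cvl_0_0_ns_spdk as in the log) and using rpc.py in place of the harness's rpc_cmd wrapper; NQNs, addresses, and flags mirror the log above, with tool paths abbreviated:
    # Transport with exactly one control message to force contention
    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a      # -a: allow any host
    rpc.py bdev_malloc_create -b Malloc0 32 512                     # 32 MiB bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Three concurrent queue-depth-1 random-read initiators on cores 1-3
    for mask in 0x2 0x4 0x8; do
      spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait
The per-core latency spread in the results above (sub-millisecond averages on cores 1 and 3 versus roughly 41 ms on core 2) is consistent with this contention: the initiator that loses the race for the single control message waits until it is returned.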
00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.668 ************************************ 00:21:31.668 START TEST nvmf_wait_for_buf 00:21:31.668 ************************************ 00:21:31.668 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:31.931 * Looking for test storage... 00:21:31.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:31.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.931 --rc genhtml_branch_coverage=1 00:21:31.931 --rc genhtml_function_coverage=1 00:21:31.931 --rc genhtml_legend=1 00:21:31.931 --rc geninfo_all_blocks=1 00:21:31.931 --rc geninfo_unexecuted_blocks=1 00:21:31.931 00:21:31.931 ' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:31.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.931 --rc genhtml_branch_coverage=1 00:21:31.931 --rc genhtml_function_coverage=1 00:21:31.931 --rc genhtml_legend=1 00:21:31.931 --rc geninfo_all_blocks=1 00:21:31.931 --rc geninfo_unexecuted_blocks=1 00:21:31.931 00:21:31.931 ' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:31.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.931 --rc genhtml_branch_coverage=1 00:21:31.931 --rc genhtml_function_coverage=1 00:21:31.931 --rc genhtml_legend=1 00:21:31.931 --rc geninfo_all_blocks=1 00:21:31.931 --rc geninfo_unexecuted_blocks=1 00:21:31.931 00:21:31.931 ' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:31.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.931 --rc genhtml_branch_coverage=1 00:21:31.931 --rc genhtml_function_coverage=1 00:21:31.931 --rc genhtml_legend=1 00:21:31.931 --rc geninfo_all_blocks=1 00:21:31.931 --rc geninfo_unexecuted_blocks=1 00:21:31.931 00:21:31.931 ' 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.931 07:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.931 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.932 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.074 
07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:40.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:40.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.074 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:40.075 Found net devices under 0000:31:00.0: cvl_0_0 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:40.075 Found net devices under 0000:31:00.1: cvl_0_1 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.075 07:22:14 
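gather_supported_nvmf_pci_devs, traced above, walks the PCI bus for supported Intel (E810/X722) and Mellanox device IDs and collects the kernel net devices behind each matching function. A condensed sketch of the same idea, not the common.sh source (only the 0x159b E810 ID handled here; the real helper checks the full device table, driver binding, and link state):

intel=0x8086
declare -a net_devs
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor") device=$(<"$pci/device")
  [[ $vendor == "$intel" && $device == 0x159b ]] || continue   # E810 port
  echo "Found ${pci##*/} ($vendor - $device)"
  for net in "$pci"/net/*; do                                  # netdevs bound to this function
    [[ -e $net ]] || continue
    echo "Found net devices under ${pci##*/}: ${net##*/}"
    net_devs+=("${net##*/}")
  done
done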
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.075 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.336 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.336 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.336 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.336 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.336 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.336 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.336 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:21:40.597 00:21:40.597 --- 10.0.0.2 ping statistics --- 00:21:40.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.597 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:21:40.597 00:21:40.597 --- 10.0.0.1 ping statistics --- 00:21:40.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.597 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1327756 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1327756 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1327756 ']' 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:40.597 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:40.597 [2024-11-20 07:22:15.247472] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
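The nvmf_tcp_init sequence just traced builds a two-ended TCP dataplane on one host: the target-side port moves into a private network namespace, each side gets an address, and a ping in each direction proves the path before the target starts. Condensed from the trace (the cvl_0_0/cvl_0_1 names come from this machine's E810 ports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator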
00:21:40.597 [2024-11-20 07:22:15.247538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.597 [2024-11-20 07:22:15.339484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.857 [2024-11-20 07:22:15.380463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.858 [2024-11-20 07:22:15.380499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.858 [2024-11-20 07:22:15.380506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.858 [2024-11-20 07:22:15.380517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.858 [2024-11-20 07:22:15.380523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.858 [2024-11-20 07:22:15.381091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.427 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:41.427 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:41.427 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 
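waitforlisten, which is seen returning 0 above once the reactor is up, blocks until the freshly launched nvmf_tgt answers on its UNIX-domain RPC socket. A hypothetical reduction of what it does (the real autotest_common.sh helper also prints the "Waiting for process..." message seen in the log; the 0.5s poll interval is an assumption):

rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                       # max_retries=100, as in the trace
  kill -0 "$nvmfpid" 2> /dev/null || exit 1           # app died while starting
  if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
    break                                             # RPC server is listening
  fi
  sleep 0.5                                           # assumed interval
done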
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 Malloc0 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 [2024-11-20 07:22:16.172961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.428 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 [2024-11-20 07:22:16.209188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.688 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.688 [2024-11-20 07:22:16.315951] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:43.073 Initializing NVMe Controllers 00:21:43.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:43.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:43.073 Initialization complete. Launching workers. 00:21:43.073 ======================================================== 00:21:43.073 Latency(us) 00:21:43.073 Device Information : IOPS MiB/s Average min max 00:21:43.073 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165843.29 47876.35 191552.22 00:21:43.073 ======================================================== 00:21:43.073 Total : 25.00 3.12 165843.29 47876.35 191552.22 00:21:43.073 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.073 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.073 rmmod nvme_tcp 00:21:43.073 rmmod nvme_fabrics 00:21:43.333 rmmod nvme_keyring 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1327756 ']' 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1327756 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1327756 ']' 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1327756 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1327756 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1327756' 00:21:43.333 killing process with pid 1327756 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1327756 00:21:43.333 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1327756 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.333 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.877 00:21:45.877 real 0m13.766s 00:21:45.877 user 0m5.415s 00:21:45.877 sys 0m6.903s 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:45.877 ************************************ 00:21:45.877 END TEST nvmf_wait_for_buf 00:21:45.877 ************************************ 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:45.877 07:22:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.877 07:22:20 
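Put together, the nvmf_wait_for_buf test that just reported PASS works like this (commands reconstructed from the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py): it caps the small iobuf pool at 154 buffers so the TCP transport must queue and retry buffer requests under load, then requires a non-zero retry counter afterwards, which proves the wait-for-buffer path actually ran.

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
retry_count=$(rpc_cmd iobuf_get_stats \
  | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && exit 1   # 374 retries in this run: buffers were exhausted and re-queued as intended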
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:54.021 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:54.021 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.021 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:54.022 Found net devices under 0000:31:00.0: cvl_0_0 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:54.022 Found net devices under 0000:31:00.1: cvl_0_1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 ************************************ 00:21:54.022 START TEST nvmf_perf_adq 00:21:54.022 ************************************ 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:54.022 * Looking for test storage... 00:21:54.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.022 07:22:28 
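run_test, which wraps the nvmf_perf_adq invocation above, is the autotest harness that prints the START/END banners and accounts the real/user/sys times seen at the close of the previous test. Roughly, as a sketch rather than the autotest_common.sh source:

run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                                # produces the real/user/sys lines in this log
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}
run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp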
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:54.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.022 --rc genhtml_branch_coverage=1 00:21:54.022 --rc genhtml_function_coverage=1 00:21:54.022 --rc genhtml_legend=1 00:21:54.022 --rc geninfo_all_blocks=1 00:21:54.022 --rc geninfo_unexecuted_blocks=1 00:21:54.022 00:21:54.022 ' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:54.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.022 --rc genhtml_branch_coverage=1 00:21:54.022 --rc genhtml_function_coverage=1 00:21:54.022 --rc genhtml_legend=1 00:21:54.022 --rc geninfo_all_blocks=1 00:21:54.022 --rc geninfo_unexecuted_blocks=1 00:21:54.022 00:21:54.022 ' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:54.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.022 --rc genhtml_branch_coverage=1 00:21:54.022 --rc genhtml_function_coverage=1 00:21:54.022 --rc genhtml_legend=1 00:21:54.022 --rc geninfo_all_blocks=1 00:21:54.022 --rc geninfo_unexecuted_blocks=1 00:21:54.022 00:21:54.022 ' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:54.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.022 --rc genhtml_branch_coverage=1 00:21:54.022 --rc genhtml_function_coverage=1 00:21:54.022 --rc genhtml_legend=1 00:21:54.022 --rc geninfo_all_blocks=1 00:21:54.022 --rc geninfo_unexecuted_blocks=1 00:21:54.022 00:21:54.022 ' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.022 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:54.023 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.023 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.199 07:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:02.199 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.199 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:02.200 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:02.200 Found net devices under 0000:31:00.0: cvl_0_0 00:22:02.200 07:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:02.200 Found net devices under 0000:31:00.1: cvl_0_1 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:02.200 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:04.111 07:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:06.020 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
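
adq_reload_driver, traced above, cycles the E810's ice driver so each pass starts from a clean queue/channel state: the mqprio qdisc module is loaded, ice is removed and re-inserted, and the script waits for the ports to come back before nvmftestinit. The sequence, pulled out of the trace:

    modprobe -a sch_mqprio    # qdisc module used later for the ADQ mqprio config
    rmmod ice                 # drop the E810 driver (takes cvl_0_0/cvl_0_1 down)
    modprobe ice              # reload it fresh
    sleep 5                   # let the ports re-register before the test continues
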
gather_supported_nvmf_pci_devs 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:11.308 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:11.308 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:11.308 Found net devices under 0000:31:00.0: cvl_0_0 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:11.308 Found net devices under 0000:31:00.1: cvl_0_1 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.308 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:11.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:11.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:22:11.309 
00:22:11.309 --- 10.0.0.2 ping statistics ---
00:22:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:11.309 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:11.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:22:11.309 00:22:11.309 --- 10.0.0.1 ping statistics --- 00:22:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.309 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1339159 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1339159 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1339159 ']' 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:11.309 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.309 [2024-11-20 07:22:45.956434] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
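
The two pings above close out nvmftestinit: one port of the two-port E810 (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), its peer cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), and an iptables rule opened TCP/4420, so NVMe/TCP traffic crosses real hardware between the namespaces. The plumbing, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
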
00:22:11.309 [2024-11-20 07:22:45.956508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.309 [2024-11-20 07:22:46.048747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.570 [2024-11-20 07:22:46.091416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.570 [2024-11-20 07:22:46.091453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.570 [2024-11-20 07:22:46.091461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.570 [2024-11-20 07:22:46.091468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.570 [2024-11-20 07:22:46.091473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.570 [2024-11-20 07:22:46.093148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.570 [2024-11-20 07:22:46.093265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.570 [2024-11-20 07:22:46.093421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.570 [2024-11-20 07:22:46.093421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.142 
07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.142 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 [2024-11-20 07:22:46.932328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 Malloc1 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.404 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.404 [2024-11-20 07:22:47.000318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.404 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.404 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1339392 00:22:12.404 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:12.404 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:14.318 "tick_rate": 2400000000, 00:22:14.318 "poll_groups": [ 00:22:14.318 { 00:22:14.318 "name": "nvmf_tgt_poll_group_000", 00:22:14.318 "admin_qpairs": 1, 00:22:14.318 "io_qpairs": 1, 00:22:14.318 "current_admin_qpairs": 1, 00:22:14.318 "current_io_qpairs": 1, 00:22:14.318 "pending_bdev_io": 0, 00:22:14.318 "completed_nvme_io": 19754, 00:22:14.318 "transports": [ 00:22:14.318 { 00:22:14.318 "trtype": "TCP" 00:22:14.318 } 00:22:14.318 ] 00:22:14.318 }, 00:22:14.318 { 00:22:14.318 "name": "nvmf_tgt_poll_group_001", 00:22:14.318 "admin_qpairs": 0, 00:22:14.318 "io_qpairs": 1, 00:22:14.318 "current_admin_qpairs": 0, 00:22:14.318 "current_io_qpairs": 1, 00:22:14.318 "pending_bdev_io": 0, 00:22:14.318 "completed_nvme_io": 28728, 00:22:14.318 "transports": [ 00:22:14.318 { 00:22:14.318 "trtype": "TCP" 00:22:14.318 } 00:22:14.318 ] 00:22:14.318 }, 00:22:14.318 { 00:22:14.318 "name": "nvmf_tgt_poll_group_002", 00:22:14.318 "admin_qpairs": 0, 00:22:14.318 "io_qpairs": 1, 00:22:14.318 "current_admin_qpairs": 0, 00:22:14.318 "current_io_qpairs": 1, 00:22:14.318 "pending_bdev_io": 0, 00:22:14.318 "completed_nvme_io": 21346, 00:22:14.318 "transports": [ 00:22:14.318 { 00:22:14.318 "trtype": "TCP" 00:22:14.318 } 00:22:14.318 ] 00:22:14.318 }, 00:22:14.318 { 00:22:14.318 "name": "nvmf_tgt_poll_group_003", 00:22:14.318 "admin_qpairs": 0, 00:22:14.318 "io_qpairs": 1, 00:22:14.318 "current_admin_qpairs": 0, 00:22:14.318 "current_io_qpairs": 1, 00:22:14.318 "pending_bdev_io": 0, 00:22:14.318 "completed_nvme_io": 20200, 00:22:14.318 "transports": [ 00:22:14.318 { 00:22:14.318 "trtype": "TCP" 00:22:14.318 } 00:22:14.318 ] 00:22:14.318 } 00:22:14.318 ] 00:22:14.318 }' 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:14.318 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:14.593 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:14.594 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:14.594 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1339392 00:22:22.736 Initializing NVMe Controllers 00:22:22.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:22.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:22.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:22.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:22.736 Initialization complete. Launching workers. 00:22:22.737 ======================================================== 00:22:22.737 Latency(us) 00:22:22.737 Device Information : IOPS MiB/s Average min max 00:22:22.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13830.90 54.03 4627.00 1335.63 8765.87 00:22:22.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15263.10 59.62 4192.51 1374.70 8843.95 00:22:22.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11248.80 43.94 5689.95 1705.72 11062.46 00:22:22.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13413.90 52.40 4770.84 1278.57 11150.56 00:22:22.737 ======================================================== 00:22:22.737 Total : 53756.69 209.99 4761.95 1278.57 11150.56 00:22:22.737 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.737 rmmod nvme_tcp 00:22:22.737 rmmod nvme_fabrics 00:22:22.737 rmmod nvme_keyring 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1339159 ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1339159 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1339159 ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1339159 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1339159 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1339159' 00:22:22.737 killing process with pid 1339159 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1339159 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1339159 00:22:22.737 07:22:57 
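
The nvmf_get_stats check above is the pass/fail core of this first (non-ADQ baseline) run: after spdk_nvme_perf connects four queues from cores 4-7, the test counts poll groups that own exactly one active I/O qpair and expects all 4 to match, i.e. one connection per poll group. The check as the trace runs it, a sketch (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py):

    count=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    [[ $count -ne 4 ]] && echo 'connections not spread across poll groups' && exit 1

Here the count is 4, so the baseline distribution is even and the suite proceeds to tear the target down and reconfigure for the ADQ pass.
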
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.737 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.283 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.283 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:25.283 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:25.283 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:26.668 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:28.586 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:33.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:33.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:33.878 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:33.879 Found net devices under 0000:31:00.0: cvl_0_0 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:33.879 Found net devices under 0000:31:00.1: cvl_0_1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.879 07:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:22:33.879 00:22:33.879 --- 10.0.0.2 ping statistics --- 00:22:33.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.879 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:22:33.879 00:22:33.879 --- 10.0.0.1 ping statistics --- 00:22:33.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.879 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:33.879 net.core.busy_poll = 1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:33.879 net.core.busy_read = 1 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:33.879 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1344060 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1344060 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1344060 ']' 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.141 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.141 [2024-11-20 07:23:08.849055] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:22:34.141 [2024-11-20 07:23:08.849121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.403 [2024-11-20 07:23:08.939403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.403 [2024-11-20 07:23:08.980867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
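
adq_configure_driver, just traced, is where ADQ actually gets wired up for the second pass: hardware tc offload is enabled on the namespaced port, busy polling is turned on, mqprio splits the queues into two traffic classes (TC0 = queues 0-1 default, TC1 = queues 2-3 for the application), and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw). The target started next is then told to use socket placement (the --enable-placement-id 1 RPC below) so its poll groups line up with those queues. Condensed from the trace (run inside cvl_0_0_ns_spdk):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0: 2 queues at offset 0 (default); TC1: 2 queues at offset 2 (ADQ class)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst port 4420) into TC1, offloaded to the NIC
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
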
00:22:34.403 [2024-11-20 07:23:08.980908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.403 [2024-11-20 07:23:08.980917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.403 [2024-11-20 07:23:08.980924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.403 [2024-11-20 07:23:08.980930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.403 [2024-11-20 07:23:08.982640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.403 [2024-11-20 07:23:08.982759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.403 [2024-11-20 07:23:08.982923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.403 [2024-11-20 07:23:08.982923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.974 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.235 07:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:35.235 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.236 [2024-11-20 07:23:09.825367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.236 Malloc1 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.236 [2024-11-20 07:23:09.892258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1344215 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:35.236 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:37.149 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:37.149 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.149 07:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:37.409 "tick_rate": 2400000000, 00:22:37.409 "poll_groups": [ 00:22:37.409 { 00:22:37.409 "name": "nvmf_tgt_poll_group_000", 00:22:37.409 "admin_qpairs": 1, 00:22:37.409 "io_qpairs": 2, 00:22:37.409 "current_admin_qpairs": 1, 00:22:37.409 "current_io_qpairs": 2, 00:22:37.409 "pending_bdev_io": 0, 00:22:37.409 "completed_nvme_io": 27910, 00:22:37.409 "transports": [ 00:22:37.409 { 00:22:37.409 "trtype": "TCP" 00:22:37.409 } 00:22:37.409 ] 00:22:37.409 }, 00:22:37.409 { 00:22:37.409 "name": "nvmf_tgt_poll_group_001", 00:22:37.409 "admin_qpairs": 0, 00:22:37.409 "io_qpairs": 2, 00:22:37.409 "current_admin_qpairs": 0, 00:22:37.409 "current_io_qpairs": 2, 00:22:37.409 "pending_bdev_io": 0, 00:22:37.409 "completed_nvme_io": 38644, 00:22:37.409 "transports": [ 00:22:37.409 { 00:22:37.409 "trtype": "TCP" 00:22:37.409 } 00:22:37.409 ] 00:22:37.409 }, 00:22:37.409 { 00:22:37.409 "name": "nvmf_tgt_poll_group_002", 00:22:37.409 "admin_qpairs": 0, 00:22:37.409 "io_qpairs": 0, 00:22:37.409 "current_admin_qpairs": 0, 00:22:37.409 "current_io_qpairs": 0, 00:22:37.409 "pending_bdev_io": 0, 00:22:37.409 "completed_nvme_io": 0, 00:22:37.409 "transports": [ 00:22:37.409 { 00:22:37.409 "trtype": "TCP" 00:22:37.409 } 00:22:37.409 ] 00:22:37.409 }, 00:22:37.409 { 00:22:37.409 "name": "nvmf_tgt_poll_group_003", 00:22:37.409 "admin_qpairs": 0, 00:22:37.409 "io_qpairs": 0, 00:22:37.409 "current_admin_qpairs": 0, 00:22:37.409 "current_io_qpairs": 0, 00:22:37.409 "pending_bdev_io": 0, 00:22:37.409 "completed_nvme_io": 0, 00:22:37.409 "transports": [ 00:22:37.409 { 00:22:37.409 "trtype": "TCP" 00:22:37.409 } 00:22:37.409 ] 00:22:37.409 } 00:22:37.409 ] 00:22:37.409 }' 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:37.409 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1344215 00:22:45.656 Initializing NVMe Controllers 00:22:45.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:45.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:45.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:45.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:45.656 Initialization complete. Launching workers. 
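Note: the nvmf_get_stats check above is the pass/fail gate for ADQ. With a 0xF core mask there are four poll groups, and steering is considered working when the IO qpairs stay confined to two of them, leaving at least two idle. A condensed sketch of that check follows; the test goes through its rpc_cmd wrapper, scripts/rpc.py is used here only for illustration.

    # Count poll groups that carry no IO qpairs; ADQ should leave >= 2 idle.
    idle=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
    (( idle >= 2 )) || echo "ADQ steering failed: IO qpairs spread across all poll groups"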
00:22:45.656 ======================================================== 00:22:45.656 Latency(us) 00:22:45.656 Device Information : IOPS MiB/s Average min max 00:22:45.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9478.80 37.03 6753.43 1100.68 50890.65 00:22:45.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10865.60 42.44 5890.23 1115.64 50121.75 00:22:45.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10829.20 42.30 5910.56 1164.02 50807.33 00:22:45.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8345.40 32.60 7693.26 1314.77 51930.51 00:22:45.656 ======================================================== 00:22:45.656 Total : 39518.99 154.37 6483.60 1100.68 51930.51 00:22:45.656 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.656 rmmod nvme_tcp 00:22:45.656 rmmod nvme_fabrics 00:22:45.656 rmmod nvme_keyring 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1344060 ']' 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1344060 00:22:45.656 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1344060 ']' 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1344060 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1344060 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1344060' 00:22:45.657 killing process with pid 1344060 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1344060 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1344060 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.657 
07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.657 07:23:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:48.962 00:22:48.962 real 0m55.183s 00:22:48.962 user 2m49.847s 00:22:48.962 sys 0m12.129s 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.962 ************************************ 00:22:48.962 END TEST nvmf_perf_adq 00:22:48.962 ************************************ 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:48.962 ************************************ 00:22:48.962 START TEST nvmf_shutdown 00:22:48.962 ************************************ 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:48.962 * Looking for test storage... 
00:22:48.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.962 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:48.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.963 --rc genhtml_branch_coverage=1 00:22:48.963 --rc genhtml_function_coverage=1 00:22:48.963 --rc genhtml_legend=1 00:22:48.963 --rc geninfo_all_blocks=1 00:22:48.963 --rc geninfo_unexecuted_blocks=1 00:22:48.963 00:22:48.963 ' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:48.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.963 --rc genhtml_branch_coverage=1 00:22:48.963 --rc genhtml_function_coverage=1 00:22:48.963 --rc genhtml_legend=1 00:22:48.963 --rc geninfo_all_blocks=1 00:22:48.963 --rc geninfo_unexecuted_blocks=1 00:22:48.963 00:22:48.963 ' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:48.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.963 --rc genhtml_branch_coverage=1 00:22:48.963 --rc genhtml_function_coverage=1 00:22:48.963 --rc genhtml_legend=1 00:22:48.963 --rc geninfo_all_blocks=1 00:22:48.963 --rc geninfo_unexecuted_blocks=1 00:22:48.963 00:22:48.963 ' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:48.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.963 --rc genhtml_branch_coverage=1 00:22:48.963 --rc genhtml_function_coverage=1 00:22:48.963 --rc genhtml_legend=1 00:22:48.963 --rc geninfo_all_blocks=1 00:22:48.963 --rc geninfo_unexecuted_blocks=1 00:22:48.963 00:22:48.963 ' 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
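Note: the scripts/common.sh trace above ('lt 1.15 2') is a field-wise numeric version comparison, used here to decide which lcov option spelling this tree should emit. A minimal sketch of the same idea, not the exact cmp_versions implementation:

    # Return success if $1 is strictly older than $2 (numeric, dot-separated).
    ver_lt() {
        local -a a b
        local i
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x; use the 1.x option names"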
00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.963 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:49.225 07:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.225 ************************************ 00:22:49.225 START TEST nvmf_shutdown_tc1 00:22:49.225 ************************************ 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.225 07:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.372 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.372 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.373 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:57.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:57.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:57.373 Found net devices under 0000:31:00.0: cvl_0_0 00:22:57.373 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:57.373 Found net devices under 0000:31:00.1: cvl_0_1 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.373 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.373 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.373 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.373 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.373 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:22:57.635 00:22:57.635 --- 10.0.0.2 ping statistics --- 00:22:57.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.635 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:22:57.635 00:22:57.635 --- 10.0.0.1 ping statistics --- 00:22:57.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.635 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1351359 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1351359 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1351359 ']' 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
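Note: waitforlisten above blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket. SPDK's real helper polls the RPC server; the sketch below only captures the shape of it (watching the pid and the socket file is a simplifying assumption, not the actual implementation):

    # Simplified stand-in for waitforlisten; not SPDK's implementation.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [[ -S $sock ]] && return 0               # RPC socket is up
            sleep 0.5
        done
        return 1                                     # timed out
    }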
00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.635 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.635 [2024-11-20 07:23:32.318938] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:22:57.635 [2024-11-20 07:23:32.319022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.897 [2024-11-20 07:23:32.428981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.897 [2024-11-20 07:23:32.480072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.897 [2024-11-20 07:23:32.480124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.897 [2024-11-20 07:23:32.480133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.897 [2024-11-20 07:23:32.480140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.897 [2024-11-20 07:23:32.480147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.897 [2024-11-20 07:23:32.482184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.897 [2024-11-20 07:23:32.482349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.897 [2024-11-20 07:23:32.482517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.897 [2024-11-20 07:23:32.482518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.468 [2024-11-20 07:23:33.175791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.468 07:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.468 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.730 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.730 Malloc1 
00:22:58.730 [2024-11-20 07:23:33.305832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.730 Malloc2 00:22:58.730 Malloc3 00:22:58.730 Malloc4 00:22:58.730 Malloc5 00:22:58.730 Malloc6 00:22:58.991 Malloc7 00:22:58.991 Malloc8 00:22:58.991 Malloc9 00:22:58.991 Malloc10 00:22:58.991 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.991 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.991 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.991 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1351716 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1351716 /var/tmp/bdevperf.sock 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1351716 ']' 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
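Note: the create_subsystems loop above (the repeated 'for i in ${num_subsystems[@]}' / cat pairs) appends one block of RPCs per subsystem to rpcs.txt and replays them as a batch, which is why Malloc1 through Malloc10 appear together. The heredoc body itself is not visible in this trace, so the block below is a reconstruction from the observed output; the serial number format is illustrative.

    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    scripts/rpc.py < rpcs.txt   # rpc.py accepts a newline-separated batch on stdin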
00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.992 { 00:22:58.992 "params": { 00:22:58.992 "name": "Nvme$subsystem", 00:22:58.992 "trtype": "$TEST_TRANSPORT", 00:22:58.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.992 "adrfam": "ipv4", 00:22:58.992 "trsvcid": "$NVMF_PORT", 00:22:58.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.992 "hdgst": ${hdgst:-false}, 00:22:58.992 "ddgst": ${ddgst:-false} 00:22:58.992 }, 00:22:58.992 "method": "bdev_nvme_attach_controller" 00:22:58.992 } 00:22:58.992 EOF 00:22:58.992 )") 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.992 { 00:22:58.992 "params": { 00:22:58.992 "name": "Nvme$subsystem", 00:22:58.992 "trtype": "$TEST_TRANSPORT", 00:22:58.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.992 "adrfam": "ipv4", 00:22:58.992 "trsvcid": "$NVMF_PORT", 00:22:58.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.992 "hdgst": ${hdgst:-false}, 00:22:58.992 "ddgst": ${ddgst:-false} 00:22:58.992 }, 00:22:58.992 "method": "bdev_nvme_attach_controller" 00:22:58.992 } 00:22:58.992 EOF 00:22:58.992 )") 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.992 { 00:22:58.992 "params": { 00:22:58.992 "name": "Nvme$subsystem", 00:22:58.992 "trtype": "$TEST_TRANSPORT", 00:22:58.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.992 "adrfam": "ipv4", 00:22:58.992 "trsvcid": "$NVMF_PORT", 00:22:58.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.992 "hdgst": ${hdgst:-false}, 00:22:58.992 "ddgst": ${ddgst:-false} 00:22:58.992 }, 00:22:58.992 "method": "bdev_nvme_attach_controller" 
00:22:58.992 } 00:22:58.992 EOF 00:22:58.992 )") 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.992 { 00:22:58.992 "params": { 00:22:58.992 "name": "Nvme$subsystem", 00:22:58.992 "trtype": "$TEST_TRANSPORT", 00:22:58.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.992 "adrfam": "ipv4", 00:22:58.992 "trsvcid": "$NVMF_PORT", 00:22:58.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.992 "hdgst": ${hdgst:-false}, 00:22:58.992 "ddgst": ${ddgst:-false} 00:22:58.992 }, 00:22:58.992 "method": "bdev_nvme_attach_controller" 00:22:58.992 } 00:22:58.992 EOF 00:22:58.992 )") 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.992 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.992 { 00:22:58.992 "params": { 00:22:58.992 "name": "Nvme$subsystem", 00:22:58.992 "trtype": "$TEST_TRANSPORT", 00:22:58.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.992 "adrfam": "ipv4", 00:22:58.992 "trsvcid": "$NVMF_PORT", 00:22:58.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.992 "hdgst": ${hdgst:-false}, 00:22:58.992 "ddgst": ${ddgst:-false} 00:22:58.992 }, 00:22:58.992 "method": "bdev_nvme_attach_controller" 00:22:58.992 } 00:22:58.992 EOF 00:22:58.992 )") 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.253 { 00:22:59.253 "params": { 00:22:59.253 "name": "Nvme$subsystem", 00:22:59.253 "trtype": "$TEST_TRANSPORT", 00:22:59.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.253 "adrfam": "ipv4", 00:22:59.253 "trsvcid": "$NVMF_PORT", 00:22:59.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.253 "hdgst": ${hdgst:-false}, 00:22:59.253 "ddgst": ${ddgst:-false} 00:22:59.253 }, 00:22:59.253 "method": "bdev_nvme_attach_controller" 00:22:59.253 } 00:22:59.253 EOF 00:22:59.253 )") 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.253 [2024-11-20 07:23:33.767993] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
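The repeated nvmf/common.sh@562/@582 steps above are gen_nvmf_target_json at work: one heredoc per requested subsystem is expanded inside a command substitution and appended to the config array, and the interleaved `# cat` lines are those substitutions being evaluated. Roughly, under that reading of the trace (the real helper also embeds the joined fragments in a larger JSON document before handing it to the app):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # comma-join the fragments; jq . validates the result downstream
}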
00:22:59.253 [2024-11-20 07:23:33.768046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.253 { 00:22:59.253 "params": { 00:22:59.253 "name": "Nvme$subsystem", 00:22:59.253 "trtype": "$TEST_TRANSPORT", 00:22:59.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.253 "adrfam": "ipv4", 00:22:59.253 "trsvcid": "$NVMF_PORT", 00:22:59.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.253 "hdgst": ${hdgst:-false}, 00:22:59.253 "ddgst": ${ddgst:-false} 00:22:59.253 }, 00:22:59.253 "method": "bdev_nvme_attach_controller" 00:22:59.253 } 00:22:59.253 EOF 00:22:59.253 )") 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.253 { 00:22:59.253 "params": { 00:22:59.253 "name": "Nvme$subsystem", 00:22:59.253 "trtype": "$TEST_TRANSPORT", 00:22:59.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.253 "adrfam": "ipv4", 00:22:59.253 "trsvcid": "$NVMF_PORT", 00:22:59.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.253 "hdgst": ${hdgst:-false}, 00:22:59.253 "ddgst": ${ddgst:-false} 00:22:59.253 }, 00:22:59.253 "method": "bdev_nvme_attach_controller" 00:22:59.253 } 00:22:59.253 EOF 00:22:59.253 )") 00:22:59.253 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.254 { 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme$subsystem", 00:22:59.254 "trtype": "$TEST_TRANSPORT", 00:22:59.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "$NVMF_PORT", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.254 "hdgst": ${hdgst:-false}, 00:22:59.254 "ddgst": ${ddgst:-false} 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 } 00:22:59.254 EOF 00:22:59.254 )") 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.254 { 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme$subsystem", 00:22:59.254 "trtype": "$TEST_TRANSPORT", 00:22:59.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.254 "adrfam": "ipv4", 
00:22:59.254 "trsvcid": "$NVMF_PORT", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.254 "hdgst": ${hdgst:-false}, 00:22:59.254 "ddgst": ${ddgst:-false} 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 } 00:22:59.254 EOF 00:22:59.254 )") 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:59.254 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme1", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme2", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme3", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme4", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme5", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme6", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme7", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 
"adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme8", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme9", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 },{ 00:22:59.254 "params": { 00:22:59.254 "name": "Nvme10", 00:22:59.254 "trtype": "tcp", 00:22:59.254 "traddr": "10.0.0.2", 00:22:59.254 "adrfam": "ipv4", 00:22:59.254 "trsvcid": "4420", 00:22:59.254 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:59.254 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:59.254 "hdgst": false, 00:22:59.254 "ddgst": false 00:22:59.254 }, 00:22:59.254 "method": "bdev_nvme_attach_controller" 00:22:59.254 }' 00:22:59.254 [2024-11-20 07:23:33.847664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.254 [2024-11-20 07:23:33.883904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1351716 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:00.641 07:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:01.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1351716 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1351359 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.586 { 00:23:01.586 "params": { 00:23:01.586 "name": "Nvme$subsystem", 00:23:01.586 "trtype": "$TEST_TRANSPORT", 00:23:01.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.586 "adrfam": "ipv4", 00:23:01.586 "trsvcid": "$NVMF_PORT", 00:23:01.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.586 "hdgst": ${hdgst:-false}, 00:23:01.586 "ddgst": ${ddgst:-false} 00:23:01.586 }, 00:23:01.586 "method": "bdev_nvme_attach_controller" 00:23:01.586 } 00:23:01.586 EOF 00:23:01.586 )") 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.586 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.586 { 00:23:01.586 "params": { 00:23:01.586 "name": "Nvme$subsystem", 00:23:01.586 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 [2024-11-20 07:23:36.260766] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
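As with bdev_svc before it, bdevperf never reads a config file from disk: the generated JSON arrives through a process substitution that bash exposes as /dev/fd/62 (the shutdown.sh "Killed" line above shows the unexpanded <(gen_nvmf_target_json ...) form). In sketch form, matching the command traced at target/shutdown.sh@92:

"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

Nothing extra lands in the workspace, so the stoptarget step afterwards only has the verify state file, bdevperf.conf, and rpcs.txt to remove.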
00:23:01.587 [2024-11-20 07:23:36.260820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352111 ] 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.587 { 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme$subsystem", 00:23:01.587 "trtype": "$TEST_TRANSPORT", 00:23:01.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "$NVMF_PORT", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.587 "hdgst": ${hdgst:-false}, 00:23:01.587 "ddgst": ${ddgst:-false} 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 } 00:23:01.587 EOF 00:23:01.587 )") 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
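Before the run starts, a sanity check on the numbers that follow: with -q 64 -o 65536 -w verify -t 1, bdevperf keeps 64 outstanding 64 KiB I/Os per job for one second, and the MiB/s column is derived from IOPS alone:

MiB/s = IOPS x 65536 / 2^20 = IOPS / 16

so the 1877.00 IOPS one-second sample reported below corresponds to 1877.00 / 16 = 117.31 MiB/s, and the 2392.89 IOPS total to 149.56 MiB/s, exactly as printed.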
00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.587 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme1", 00:23:01.587 "trtype": "tcp", 00:23:01.587 "traddr": "10.0.0.2", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "4420", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.587 "hdgst": false, 00:23:01.587 "ddgst": false 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 },{ 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme2", 00:23:01.587 "trtype": "tcp", 00:23:01.587 "traddr": "10.0.0.2", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "4420", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.587 "hdgst": false, 00:23:01.587 "ddgst": false 00:23:01.587 }, 00:23:01.587 "method": "bdev_nvme_attach_controller" 00:23:01.587 },{ 00:23:01.587 "params": { 00:23:01.587 "name": "Nvme3", 00:23:01.587 "trtype": "tcp", 00:23:01.587 "traddr": "10.0.0.2", 00:23:01.587 "adrfam": "ipv4", 00:23:01.587 "trsvcid": "4420", 00:23:01.587 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.587 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.587 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme4", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme5", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme6", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme7", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme8", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme9", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 },{ 00:23:01.588 "params": { 00:23:01.588 "name": "Nvme10", 00:23:01.588 "trtype": "tcp", 00:23:01.588 "traddr": "10.0.0.2", 00:23:01.588 "adrfam": "ipv4", 00:23:01.588 "trsvcid": "4420", 00:23:01.588 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.588 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.588 "hdgst": false, 00:23:01.588 "ddgst": false 00:23:01.588 }, 00:23:01.588 "method": "bdev_nvme_attach_controller" 00:23:01.588 }' 00:23:01.588 [2024-11-20 07:23:36.340262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.849 [2024-11-20 07:23:36.376097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.792 Running I/O for 1 seconds... 00:23:04.180 1877.00 IOPS, 117.31 MiB/s 00:23:04.180 Latency(us) 00:23:04.180 [2024-11-20T06:23:38.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme1n1 : 1.15 227.54 14.22 0.00 0.00 272704.46 26978.99 232434.35 00:23:04.180 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme2n1 : 1.17 219.58 13.72 0.00 0.00 283940.69 18131.63 255153.49 00:23:04.180 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme3n1 : 1.09 244.46 15.28 0.00 0.00 240161.42 24685.23 242920.11 00:23:04.180 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme4n1 : 1.17 272.88 17.05 0.00 0.00 219470.34 5352.11 263891.63 00:23:04.180 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme5n1 : 1.17 218.87 13.68 0.00 0.00 270739.63 16711.68 256901.12 00:23:04.180 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme6n1 : 1.15 222.78 13.92 0.00 0.00 260120.11 18022.40 251658.24 00:23:04.180 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme7n1 : 1.16 224.85 14.05 0.00 0.00 253408.47 3058.35 251658.24 00:23:04.180 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme8n1 : 1.19 269.66 16.85 0.00 0.00 208552.45 18896.21 246415.36 00:23:04.180 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme9n1 : 1.18 271.76 16.98 0.00 0.00 202961.75 15073.28 255153.49 00:23:04.180 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:04.180 Verification LBA range: start 0x0 length 0x400 00:23:04.180 Nvme10n1 : 1.18 220.52 13.78 0.00 0.00 244861.51 3167.57 274377.39 00:23:04.180 [2024-11-20T06:23:38.947Z] =================================================================================================================== 00:23:04.180 [2024-11-20T06:23:38.947Z] Total : 2392.89 149.56 0.00 0.00 243282.82 3058.35 274377.39 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.180 rmmod nvme_tcp 00:23:04.180 rmmod nvme_fabrics 00:23:04.180 rmmod nvme_keyring 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1351359 ']' 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1351359 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1351359 ']' 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1351359 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:23:04.180 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.445 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1351359 00:23:04.445 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:04.445 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:04.445 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1351359' 00:23:04.445 killing process with pid 1351359 00:23:04.445 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1351359 00:23:04.445 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1351359 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.708 07:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.625 00:23:06.625 real 0m17.519s 00:23:06.625 user 0m33.052s 00:23:06.625 sys 0m7.507s 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:06.625 ************************************ 00:23:06.625 END TEST nvmf_shutdown_tc1 00:23:06.625 ************************************ 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.625 ************************************ 00:23:06.625 START TEST nvmf_shutdown_tc2 00:23:06.625 ************************************ 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:23:06.625 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.625 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:06.888 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:06.888 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:06.888 Found net devices under 0000:31:00.0: cvl_0_0 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.888 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:06.888 Found net devices under 0000:31:00.1: cvl_0_1 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.888 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.889 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.889 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.889 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:23:07.150 00:23:07.150 --- 10.0.0.2 ping statistics --- 00:23:07.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.150 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:07.150 00:23:07.150 --- 10.0.0.1 ping statistics --- 00:23:07.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.150 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.150 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1353229 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1353229 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1353229 ']' 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:07.150 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.150 [2024-11-20 07:23:41.856833] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:07.150 [2024-11-20 07:23:41.856908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.411 [2024-11-20 07:23:41.966032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.411 [2024-11-20 07:23:42.000829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.411 [2024-11-20 07:23:42.000867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.411 [2024-11-20 07:23:42.000873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.411 [2024-11-20 07:23:42.000878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.411 [2024-11-20 07:23:42.000882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
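The nvmf_tcp_init trace above builds the loopback test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), while the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic crosses the link between the two ports (the cross-namespace pings above confirm the path). Condensed into a runnable sketch, with device names and addresses as in this run:

# Sketch of the namespace plumbing nvmf_tcp_init performs above (run as root).
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"                    # drop stale addresses
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"                               # private namespace for the target
ip link set "$TARGET_IF" netns "$NS"             # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP, inside the namespace
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface; the
# harness wraps this in its 'ipts' helper, which tags the rule with an
# SPDK_NVMF comment so cleanup can strip it later
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> initiator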
00:23:07.411 [2024-11-20 07:23:42.002206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.411 [2024-11-20 07:23:42.002367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.411 [2024-11-20 07:23:42.002528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.411 [2024-11-20 07:23:42.002530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.984 [2024-11-20 07:23:42.710784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.984 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.245 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.246 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.246 Malloc1 00:23:08.246 [2024-11-20 07:23:42.827568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.246 Malloc2 00:23:08.246 Malloc3 00:23:08.246 Malloc4 00:23:08.246 Malloc5 00:23:08.246 Malloc6 00:23:08.508 Malloc7 00:23:08.508 Malloc8 00:23:08.508 Malloc9 00:23:08.508 Malloc10 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1353615 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1353615 /var/tmp/bdevperf.sock 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1353615 ']' 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.508 07:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.508 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.508 { 00:23:08.508 "params": { 00:23:08.508 "name": "Nvme$subsystem", 00:23:08.508 "trtype": "$TEST_TRANSPORT", 00:23:08.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.508 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 
"name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.509 [2024-11-20 07:23:43.268796] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:08.509 [2024-11-20 07:23:43.268847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353615 ] 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.509 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.509 { 00:23:08.509 "params": { 00:23:08.509 "name": "Nvme$subsystem", 00:23:08.509 "trtype": "$TEST_TRANSPORT", 00:23:08.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.509 "adrfam": "ipv4", 00:23:08.509 "trsvcid": "$NVMF_PORT", 00:23:08.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.509 "hdgst": ${hdgst:-false}, 00:23:08.509 "ddgst": ${ddgst:-false} 00:23:08.509 }, 00:23:08.509 "method": "bdev_nvme_attach_controller" 00:23:08.509 } 00:23:08.509 EOF 00:23:08.509 )") 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.771 { 00:23:08.771 "params": { 00:23:08.771 "name": "Nvme$subsystem", 00:23:08.771 "trtype": "$TEST_TRANSPORT", 00:23:08.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.771 "adrfam": "ipv4", 00:23:08.771 "trsvcid": "$NVMF_PORT", 00:23:08.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.771 "hdgst": ${hdgst:-false}, 00:23:08.771 "ddgst": ${ddgst:-false} 00:23:08.771 }, 00:23:08.771 "method": "bdev_nvme_attach_controller" 00:23:08.771 } 00:23:08.771 EOF 00:23:08.771 )") 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.771 { 00:23:08.771 "params": { 00:23:08.771 "name": "Nvme$subsystem", 00:23:08.771 "trtype": "$TEST_TRANSPORT", 00:23:08.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.771 
"adrfam": "ipv4", 00:23:08.771 "trsvcid": "$NVMF_PORT", 00:23:08.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.771 "hdgst": ${hdgst:-false}, 00:23:08.771 "ddgst": ${ddgst:-false} 00:23:08.771 }, 00:23:08.771 "method": "bdev_nvme_attach_controller" 00:23:08.771 } 00:23:08.771 EOF 00:23:08.771 )") 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:08.771 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:08.771 "params": { 00:23:08.771 "name": "Nvme1", 00:23:08.771 "trtype": "tcp", 00:23:08.771 "traddr": "10.0.0.2", 00:23:08.771 "adrfam": "ipv4", 00:23:08.771 "trsvcid": "4420", 00:23:08.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.771 "hdgst": false, 00:23:08.771 "ddgst": false 00:23:08.771 }, 00:23:08.771 "method": "bdev_nvme_attach_controller" 00:23:08.771 },{ 00:23:08.771 "params": { 00:23:08.771 "name": "Nvme2", 00:23:08.771 "trtype": "tcp", 00:23:08.771 "traddr": "10.0.0.2", 00:23:08.771 "adrfam": "ipv4", 00:23:08.771 "trsvcid": "4420", 00:23:08.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.771 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:08.771 "hdgst": false, 00:23:08.771 "ddgst": false 00:23:08.771 }, 00:23:08.771 "method": "bdev_nvme_attach_controller" 00:23:08.771 },{ 00:23:08.771 "params": { 00:23:08.771 "name": "Nvme3", 00:23:08.771 "trtype": "tcp", 00:23:08.771 "traddr": "10.0.0.2", 00:23:08.771 "adrfam": "ipv4", 00:23:08.771 "trsvcid": "4420", 00:23:08.771 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:08.771 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme4", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme5", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme6", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme7", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 
00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme8", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme9", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 },{ 00:23:08.772 "params": { 00:23:08.772 "name": "Nvme10", 00:23:08.772 "trtype": "tcp", 00:23:08.772 "traddr": "10.0.0.2", 00:23:08.772 "adrfam": "ipv4", 00:23:08.772 "trsvcid": "4420", 00:23:08.772 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:08.772 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:08.772 "hdgst": false, 00:23:08.772 "ddgst": false 00:23:08.772 }, 00:23:08.772 "method": "bdev_nvme_attach_controller" 00:23:08.772 }' 00:23:08.772 [2024-11-20 07:23:43.347580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.772 [2024-11-20 07:23:43.383939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.688 Running I/O for 10 seconds... 
00:23:10.688 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.688 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:10.688 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.688 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.688 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:10.688 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:10.689 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.949 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:10.949 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1353615 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1353615 ']' 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1353615 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1353615 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1353615' 00:23:11.210 killing process with pid 1353615 00:23:11.210 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1353615
00:23:11.210 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1353615
00:23:11.210 Received shutdown signal, test time was about 0.979960 seconds
00:23:11.210
00:23:11.210 Latency(us)
00:23:11.210 [2024-11-20T06:23:45.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:11.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme1n1 : 0.98 261.47 16.34 0.00 0.00 241953.71 17476.27 251658.24
00:23:11.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme2n1 : 0.97 263.03 16.44 0.00 0.00 235788.80 21080.75 260396.37
00:23:11.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme3n1 : 0.97 268.00 16.75 0.00 0.00 226207.53 3426.99 249910.61
00:23:11.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme4n1 : 0.96 266.03 16.63 0.00 0.00 223490.13 22609.92 239424.85
00:23:11.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme5n1 : 0.96 210.45 13.15 0.00 0.00 272152.03 7427.41 248162.99
00:23:11.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme6n1 : 0.95 202.68 12.67 0.00 0.00 280473.32 15619.41 260396.37
00:23:11.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.210 Nvme7n1 : 0.98 262.23 16.39 0.00 0.00 212860.16 25777.49 225443.84
00:23:11.210 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.210 Verification LBA range: start 0x0 length 0x400
00:23:11.211 Nvme8n1 : 0.95 268.38 16.77 0.00 0.00 202466.56 17039.36 241172.48
00:23:11.211 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.211 Verification LBA range: start 0x0 length 0x400
00:23:11.211 Nvme9n1 : 0.97 198.39 12.40 0.00 0.00 268227.70 20753.07 314572.80
00:23:11.211 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:11.211 Verification LBA range: start 0x0 length 0x400
00:23:11.211 Nvme10n1 : 0.96 200.35 12.52 0.00 0.00 259315.20 21845.33 276125.01
00:23:11.211 [2024-11-20T06:23:45.978Z] ===================================================================================================================
00:23:11.211 [2024-11-20T06:23:45.978Z] Total : 2401.01 150.06 0.00 0.00 239330.02 3426.99 314572.80
00:23:11.471 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1353229
07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
07:23:47
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.412 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.412 rmmod nvme_tcp 00:23:12.412 rmmod nvme_fabrics 00:23:12.412 rmmod nvme_keyring 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1353229 ']' 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1353229 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1353229 ']' 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1353229 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1353229 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1353229' 00:23:12.673 killing process with pid 1353229 00:23:12.673 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1353229 00:23:12.674 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1353229 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.936 07:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.936 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.850 00:23:14.850 real 0m8.179s 00:23:14.850 user 0m24.991s 00:23:14.850 sys 0m1.339s 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.850 ************************************ 00:23:14.850 END TEST nvmf_shutdown_tc2 00:23:14.850 ************************************ 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:14.850 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 ************************************ 00:23:15.112 START TEST nvmf_shutdown_tc3 00:23:15.112 ************************************ 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:15.112 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:15.112 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.112 07:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:15.112 Found net devices under 0000:31:00.0: cvl_0_0 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:15.112 Found net devices under 0000:31:00.1: cvl_0_1 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.112 07:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.112 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.113 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:23:15.374 00:23:15.374 --- 10.0.0.2 ping statistics --- 00:23:15.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.374 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:15.374 00:23:15.374 --- 10.0.0.1 ping statistics --- 00:23:15.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.374 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.374 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.375 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.375 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.375 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1355072 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1355072 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:15.375 07:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1355072 ']' 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.375 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 [2024-11-20 07:23:50.092259] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:15.375 [2024-11-20 07:23:50.092320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.636 [2024-11-20 07:23:50.190524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.636 [2024-11-20 07:23:50.222955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.636 [2024-11-20 07:23:50.222985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.636 [2024-11-20 07:23:50.222990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.636 [2024-11-20 07:23:50.222995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.636 [2024-11-20 07:23:50.222999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
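The interface plumbing traced above (nvmf/common.sh@250-291) is self-contained enough to reproduce by hand. A minimal sketch of the same topology, with device names and addresses taken from this run (the e810 netdevs will be named differently on other hosts; everything must run as root):

  # Move the target-side netdev into its own namespace and wire up the
  # 10.0.0.0/24 point-to-point network the TCP tests use.
  TARGET_IF=cvl_0_0        # ends up inside the namespace
  INITIATOR_IF=cvl_0_1     # stays in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # The harness tags its rule through the ipts wrapper (common.sh@790);
  # the net effect is the same as:
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions, as the log does at common.sh@290-291
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1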
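nvmfappstart then launches nvmf_tgt inside that namespace with -m 0x1E (binary 11110, i.e. cores 1-4, which matches "Total cores available: 4" and the four "Reactor started" notices below) and blocks in waitforlisten until the RPC socket answers. A rough sketch of that readiness poll, using SPDK's stock scripts/rpc.py; the helper name is illustrative, while the retry budget mirrors the max_retries=100 visible in the trace:

  # Poll until the freshly started app owns /var/tmp/spdk.sock, or give up.
  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} try
      for (( try = 0; try < 100; try++ )); do
          kill -0 "$pid" 2>/dev/null || return 1    # app died while starting
          if ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
              return 0                              # socket is up and answering
          fi
          sleep 0.1
      done
      return 1
  }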
00:23:15.636 [2024-11-20 07:23:50.224351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.636 [2024-11-20 07:23:50.224508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.636 [2024-11-20 07:23:50.224662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.636 [2024-11-20 07:23:50.224663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 [2024-11-20 07:23:50.935614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.209 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:16.470 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.470 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.470 Malloc1 00:23:16.470 [2024-11-20 07:23:51.047068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.470 Malloc2 00:23:16.470 Malloc3 00:23:16.470 Malloc4 00:23:16.470 Malloc5 00:23:16.470 Malloc6 00:23:16.732 Malloc7 00:23:16.732 Malloc8 00:23:16.732 Malloc9 00:23:16.732 Malloc10 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1355456 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1355456 /var/tmp/bdevperf.sock 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1355456 ']' 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.732 07:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.732 { 00:23:16.732 "params": { 00:23:16.732 "name": "Nvme$subsystem", 00:23:16.732 "trtype": "$TEST_TRANSPORT", 00:23:16.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.732 "adrfam": "ipv4", 00:23:16.732 "trsvcid": "$NVMF_PORT", 00:23:16.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.732 "hdgst": ${hdgst:-false}, 00:23:16.732 "ddgst": ${ddgst:-false} 00:23:16.732 }, 00:23:16.732 "method": "bdev_nvme_attach_controller" 00:23:16.732 } 00:23:16.732 EOF 00:23:16.732 )") 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.732 { 00:23:16.732 "params": { 00:23:16.732 "name": "Nvme$subsystem", 00:23:16.732 "trtype": "$TEST_TRANSPORT", 00:23:16.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.732 "adrfam": "ipv4", 00:23:16.732 "trsvcid": "$NVMF_PORT", 00:23:16.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.732 "hdgst": ${hdgst:-false}, 00:23:16.732 "ddgst": ${ddgst:-false} 00:23:16.732 }, 00:23:16.732 "method": "bdev_nvme_attach_controller" 00:23:16.732 } 00:23:16.732 EOF 00:23:16.732 )") 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.732 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.732 { 00:23:16.732 "params": { 00:23:16.732 
"name": "Nvme$subsystem", 00:23:16.732 "trtype": "$TEST_TRANSPORT", 00:23:16.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.732 "adrfam": "ipv4", 00:23:16.732 "trsvcid": "$NVMF_PORT", 00:23:16.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.732 "hdgst": ${hdgst:-false}, 00:23:16.732 "ddgst": ${ddgst:-false} 00:23:16.732 }, 00:23:16.732 "method": "bdev_nvme_attach_controller" 00:23:16.732 } 00:23:16.733 EOF 00:23:16.733 )") 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.733 { 00:23:16.733 "params": { 00:23:16.733 "name": "Nvme$subsystem", 00:23:16.733 "trtype": "$TEST_TRANSPORT", 00:23:16.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.733 "adrfam": "ipv4", 00:23:16.733 "trsvcid": "$NVMF_PORT", 00:23:16.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.733 "hdgst": ${hdgst:-false}, 00:23:16.733 "ddgst": ${ddgst:-false} 00:23:16.733 }, 00:23:16.733 "method": "bdev_nvme_attach_controller" 00:23:16.733 } 00:23:16.733 EOF 00:23:16.733 )") 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.733 { 00:23:16.733 "params": { 00:23:16.733 "name": "Nvme$subsystem", 00:23:16.733 "trtype": "$TEST_TRANSPORT", 00:23:16.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.733 "adrfam": "ipv4", 00:23:16.733 "trsvcid": "$NVMF_PORT", 00:23:16.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.733 "hdgst": ${hdgst:-false}, 00:23:16.733 "ddgst": ${ddgst:-false} 00:23:16.733 }, 00:23:16.733 "method": "bdev_nvme_attach_controller" 00:23:16.733 } 00:23:16.733 EOF 00:23:16.733 )") 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.733 { 00:23:16.733 "params": { 00:23:16.733 "name": "Nvme$subsystem", 00:23:16.733 "trtype": "$TEST_TRANSPORT", 00:23:16.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.733 "adrfam": "ipv4", 00:23:16.733 "trsvcid": "$NVMF_PORT", 00:23:16.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.733 "hdgst": ${hdgst:-false}, 00:23:16.733 "ddgst": ${ddgst:-false} 00:23:16.733 }, 00:23:16.733 "method": "bdev_nvme_attach_controller" 00:23:16.733 } 00:23:16.733 EOF 00:23:16.733 )") 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:16.733 [2024-11-20 07:23:51.495566] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:16.733 [2024-11-20 07:23:51.495618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355456 ] 00:23:16.733 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.733 { 00:23:16.733 "params": { 00:23:16.733 "name": "Nvme$subsystem", 00:23:16.733 "trtype": "$TEST_TRANSPORT", 00:23:16.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.733 "adrfam": "ipv4", 00:23:16.733 "trsvcid": "$NVMF_PORT", 00:23:16.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.733 "hdgst": ${hdgst:-false}, 00:23:16.733 "ddgst": ${ddgst:-false} 00:23:16.733 }, 00:23:16.733 "method": "bdev_nvme_attach_controller" 00:23:16.733 } 00:23:16.733 EOF 00:23:16.733 )") 00:23:16.994 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.994 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.994 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.994 { 00:23:16.994 "params": { 00:23:16.994 "name": "Nvme$subsystem", 00:23:16.994 "trtype": "$TEST_TRANSPORT", 00:23:16.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.994 "adrfam": "ipv4", 00:23:16.994 "trsvcid": "$NVMF_PORT", 00:23:16.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.994 "hdgst": ${hdgst:-false}, 00:23:16.994 "ddgst": ${ddgst:-false} 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 } 00:23:16.995 EOF 00:23:16.995 )") 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.995 { 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme$subsystem", 00:23:16.995 "trtype": "$TEST_TRANSPORT", 00:23:16.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "$NVMF_PORT", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.995 "hdgst": ${hdgst:-false}, 00:23:16.995 "ddgst": ${ddgst:-false} 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 } 00:23:16.995 EOF 00:23:16.995 )") 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:16.995 { 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme$subsystem", 00:23:16.995 "trtype": "$TEST_TRANSPORT", 00:23:16.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:16.995 
"adrfam": "ipv4", 00:23:16.995 "trsvcid": "$NVMF_PORT", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:16.995 "hdgst": ${hdgst:-false}, 00:23:16.995 "ddgst": ${ddgst:-false} 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 } 00:23:16.995 EOF 00:23:16.995 )") 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:16.995 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme1", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme2", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme3", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme4", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme5", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme6", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme7", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 
00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme8", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme9", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 },{ 00:23:16.995 "params": { 00:23:16.995 "name": "Nvme10", 00:23:16.995 "trtype": "tcp", 00:23:16.995 "traddr": "10.0.0.2", 00:23:16.995 "adrfam": "ipv4", 00:23:16.995 "trsvcid": "4420", 00:23:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:16.995 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:16.995 "hdgst": false, 00:23:16.995 "ddgst": false 00:23:16.995 }, 00:23:16.995 "method": "bdev_nvme_attach_controller" 00:23:16.995 }' 00:23:16.995 [2024-11-20 07:23:51.574288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.995 [2024-11-20 07:23:51.611425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.383 Running I/O for 10 seconds... 
00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:18.644 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.906 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.167 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.167 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:19.167 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:19.167 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.445 07:23:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1355072 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1355072 ']' 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1355072 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1355072 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:19.445 07:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1355072' 00:23:19.445 killing process with pid 1355072 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1355072 00:23:19.445 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1355072 00:23:19.445 [2024-11-20 07:23:54.080712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd020 is same with the state(6) to be set
[... the same recv-state error for tqpair=0x7cd020 repeats dozens of times between 07:23:54.080761 and 07:23:54.081068, elided ...]
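Bursts like the one above are easier to read once collapsed per queue pair; a quick sketch for triaging a saved copy of this console output (the log file name is hypothetical):

  # Tally the recv-state errors per qpair pointer, busiest first
  grep -o 'tqpair=0x[0-9a-f]*' nvmf_shutdown_tc3.log | sort | uniq -c | sort -rn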
00:23:19.446 [2024-11-20 07:23:54.082229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b6ae0 is same with the state(6) to be set
[... the same error for tqpair=0x7b6ae0 repeats dozens of times between 07:23:54.082257 and 07:23:54.082555, elided ...]
00:23:19.447 [2024-11-20 07:23:54.083382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd4f0 is same with the state(6) to be set
[... two further occurrences for tqpair=0x7cd4f0 at 07:23:54.083395 and 07:23:54.083400, elided ...]
00:23:19.447 [2024-11-20 07:23:54.084383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set
[... the same error for tqpair=0x7cd9c0 repeats dozens of times from 07:23:54.084408 onward; the capture cuts off mid-message at 07:23:54.084622 ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.447 [2024-11-20 07:23:54.084626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.447 [2024-11-20 07:23:54.084632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.447 [2024-11-20 07:23:54.084636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.447 [2024-11-20 07:23:54.084641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.447 [2024-11-20 07:23:54.084645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.084719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd9c0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 
00:23:19.448 [2024-11-20 07:23:54.085654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is 
same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.085971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cdeb0 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.086524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ce230 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.086540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ce230 is same with the state(6) to be set 00:23:19.448 [2024-11-20 07:23:54.086546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ce230 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087868] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 
00:23:19.449 [2024-11-20 07:23:54.087992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.087996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is 
same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cebd0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.449 [2024-11-20 07:23:54.088917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088936] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.088999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 
00:23:19.450 [2024-11-20 07:23:54.089044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf0a0 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is 
same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.450 [2024-11-20 07:23:54.089727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.451 [2024-11-20 07:23:54.089825] 
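[editor's note] The repeated *ERROR* line originates in the recv-state setter in SPDK's lib/nvmf/tcp.c, which logs and returns early when asked to set the state the queue pair already holds. Below is a minimal, self-contained sketch of that guard, not the SPDK source: the struct is reduced to one field, the enum ordering is illustrative (so the member behind "state(6)" may differ by SPDK revision), and plain fprintf stands in for SPDK_ERRLOG.

/* Hedged sketch of the guard behind
 * "The recv state of tqpair=%p is same with the state(%d) to be set". */
#include <stdio.h>

enum nvme_tcp_pdu_recv_state {
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH,
    NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD,
    NVME_TCP_PDU_RECV_STATE_QUIESCING,
    NVME_TCP_PDU_RECV_STATE_ERROR,        /* illustrative ordering only */
};

struct spdk_nvmf_tcp_qpair {
    enum nvme_tcp_pdu_recv_state recv_state;
};

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                              enum nvme_tcp_pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Redundant transition: log and bail out instead of re-running
         * the state-entry logic. This is the line the log repeats. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
    /* ...per-state bookkeeping elided... */
}

int main(void)
{
    struct spdk_nvmf_tcp_qpair tqpair = { NVME_TCP_PDU_RECV_STATE_ERROR };

    /* Setting the state the qpair already holds prints the error twice,
     * which is exactly the burst pattern seen above when the transport
     * keeps driving an already-errored queue pair. */
    nvmf_tcp_qpair_set_recv_state(&tqpair, NVME_TCP_PDU_RECV_STATE_ERROR);
    nvmf_tcp_qpair_set_recv_state(&tqpair, NVME_TCP_PDU_RECV_STATE_ERROR);
    return 0;
}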
00:23:19.451 [2024-11-20 07:23:54.097932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.451 [2024-11-20 07:23:54.097973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same command/completion pairing repeats for WRITE sqid:1 cid:22-63 (lba:27392-32640, len:128 each) and READ sqid:1 cid:0-3 (lba:24576-24960); every outstanding command is completed with ABORTED - SQ DELETION (00/08)]
00:23:19.452 [2024-11-20 07:23:54.098777]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.098988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.098998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:19.452 [2024-11-20 07:23:54.099565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.452 [2024-11-20 07:23:54.099584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.452 [2024-11-20 07:23:54.099597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf570 is same with the state(6) to be set 00:23:19.453 [2024-11-20 07:23:54.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:19.453 [2024-11-20 07:23:54.099943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.099987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.099996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 
07:23:54.100116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.453 [2024-11-20 07:23:54.100247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.453 [2024-11-20 07:23:54.100257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.454 [2024-11-20 07:23:54.100712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.100736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:19.454 [2024-11-20 07:23:54.100972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.100993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ad540 is same with the state(6) to be set 00:23:19.454 [2024-11-20 07:23:54.101075] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.454 [2024-11-20 07:23:54.101111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.454 [2024-11-20 07:23:54.101119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.101127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.101135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.101142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ac0b0 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.101161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28dfc40 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.109585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28db480 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.109679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24783e0 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.109777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481aa0 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.109886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b00a0 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.109975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.109984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.109995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28db2a0 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.110071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246bb00 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.110161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.455 [2024-11-20 07:23:54.110219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.455 [2024-11-20 07:23:54.110229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d0a40 is same with the state(6) to be set 00:23:19.455 [2024-11-20 07:23:54.110335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.455 [2024-11-20 07:23:54.110349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-20 07:23:54.110364-54.111478: near-identical record pairs elided: WRITE commands sqid:1 cid:1-63 nsid:1 lba:24704-32640 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command in this sequence completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:19.457 [2024-11-20 07:23:54.114192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:19.457 [2024-11-20 07:23:54.114222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:19.457 [2024-11-20 07:23:54.114239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ac0b0 (9): Bad file descriptor [... same error through 07:23:54.114396 for tqpair=0x28b00a0, 0x28ad540, 0x28dfc40, 0x28db480, 0x24783e0, 0x2481aa0, 0x28db2a0, 0x246bb00, 0x28d0a40 ...]
00:23:19.457 [2024-11-20 07:23:54.115889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:19.457 [2024-11-20 07:23:54.117251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.457 [2024-11-20 07:23:54.117294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b00a0 with addr=10.0.0.2, port=4420
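Note for triage: spdk_nvme_print_completion() renders NVMe status as "(SCT/SC)". Per the NVMe base specification, status code type 00h is Generic Command Status and, within it, status code 08h is Command Aborted due to SQ Deletion, which is the expected completion for I/O still queued when a submission queue is torn down during a controller reset; the "(9)" in the flush errors above is errno EBADF, the socket already being closed. A minimal sketch of decoding that pair follows; the is_sq_deletion_abort helper is illustrative, not an SPDK API.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative helper: interpret the "(SCT/SC)" pair printed above,
     * e.g. "(00/08)". Values follow the NVMe base specification. */
    static bool is_sq_deletion_abort(uint8_t sct, uint8_t sc)
    {
        /* SCT 00h = Generic Command Status,
         * SC  08h = Command Aborted due to SQ Deletion. */
        return sct == 0x00 && sc == 0x08;
    }

    int main(void)
    {
        printf("(00/08) aborted by SQ deletion: %s\n",
               is_sq_deletion_abort(0x00, 0x08) ? "yes" : "no");
        return 0;
    }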
00:23:19.457 [2024-11-20 07:23:54.117305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b00a0 is same with the state(6) to be set
00:23:19.457 [2024-11-20 07:23:54.117535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.457 [2024-11-20 07:23:54.117550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28ac0b0 with addr=10.0.0.2, port=4420
00:23:19.457 [2024-11-20 07:23:54.117558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ac0b0 is same with the state(6) to be set
00:23:19.457 [2024-11-20 07:23:54.118100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.457 [2024-11-20 07:23:54.118140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24783e0 with addr=10.0.0.2, port=4420
00:23:19.457 [2024-11-20 07:23:54.118152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24783e0 is same with the state(6) to be set
00:23:19.457 [2024-11-20 07:23:54.118523] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [... same error repeated 5 more times through 07:23:54.118757 ...]
00:23:19.458 [2024-11-20 07:23:54.118775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b00a0 (9): Bad file descriptor
00:23:19.458 [2024-11-20 07:23:54.118788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ac0b0 (9): Bad file descriptor
00:23:19.458 [2024-11-20 07:23:54.118799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24783e0 (9): Bad file descriptor
00:23:19.458 [2024-11-20 07:23:54.118852] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:19.458 [2024-11-20 07:23:54.118951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:19.458 [2024-11-20 07:23:54.118963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:19.458 [2024-11-20 07:23:54.118973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:19.458 [2024-11-20 07:23:54.118982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:19.458 [2024-11-20 07:23:54.118990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:19.458 [2024-11-20 07:23:54.118997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:19.458 [2024-11-20 07:23:54.119004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:19.458 [2024-11-20 07:23:54.119011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:19.458 [2024-11-20 07:23:54.119018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:19.458 [2024-11-20 07:23:54.119024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:19.458 [2024-11-20 07:23:54.119031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:19.458 [2024-11-20 07:23:54.119038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
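Note for triage: the four-record failure sequences above are the reconnect path the log itself names: nvme_ctrlr_disconnect() logs "resetting controller", and spdk_nvme_ctrlr_reconnect_poll_async() reports "controller reinitialization failed" because connect() keeps failing with errno 111 (ECONNREFUSED), i.e. nothing is listening at 10.0.0.2:4420 any longer. A minimal sketch of driving that public API follows, assuming ctrlr is a connected controller handle obtained elsewhere; the busy-wait is for brevity, real code polls from an event loop.

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch: reset a controller via SPDK's disconnect/reconnect API,
     * the same path that produced the notices and errors above. */
    static int reset_and_reconnect(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr); /* "resetting controller" */
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        /* -EAGAIN means the reconnect is still in progress; anything else
         * is final. A refused TCP connect surfaces here as a failure,
         * logged above as "controller reinitialization failed". */
        while ((rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr)) == -EAGAIN) {
            ;
        }
        return rc;
    }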
00:23:19.458 [2024-11-20 07:23:54.124359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 2024-11-20 07:23:54.124377-54.125540: near-identical record pairs elided: READ commands sqid:1 cid:1-63 nsid:1 lba:24704-32640 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command in this sequence completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:19.459 [2024-11-20 07:23:54.125549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2926d30 is same with the state(6) to be set
00:23:19.459 [2024-11-20 07:23:54.126845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 2024-11-20 07:23:54.126871-54.128031: near-identical record pairs elided: READ commands sqid:1 cid:1-62 nsid:1 lba:24704-32512 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command in this sequence completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:19.461 [2024-11-20 07:23:54.128042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:19.461 [2024-11-20 07:23:54.128049] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.128058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2685e20 is same with the state(6) to be set 00:23:19.461 [2024-11-20 07:23:54.129345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.461 [2024-11-20 07:23:54.129603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.461 [2024-11-20 07:23:54.129613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.129982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.129990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.462 [2024-11-20 07:23:54.130263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.462 [2024-11-20 07:23:54.130273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.462 [2024-11-20 07:23:54.130281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 
07:23:54.130465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.130537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.130546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26872e0 is same with the state(6) to be set 00:23:19.463 [2024-11-20 07:23:54.131820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.131982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.131989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.463 [2024-11-20 07:23:54.132820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.463 [2024-11-20 07:23:54.132830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.132985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.132995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.133004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.133012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2885970 is same with the state(6) to be set 00:23:19.464 [2024-11-20 07:23:54.134283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.134985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.134996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:19.464 [2024-11-20 07:23:54.135235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.464 [2024-11-20 07:23:54.135335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.464 [2024-11-20 07:23:54.135345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 
07:23:54.135420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.135465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.135473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2886f10 is same with the state(6) to be set 00:23:19.465 [2024-11-20 07:23:54.136759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.136983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.136990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.137930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.137939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2888460 is same with the state(6) to be set 00:23:19.465 [2024-11-20 07:23:54.139261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.139276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.139290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.139300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.139311] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.139319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.139329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.465 [2024-11-20 07:23:54.139338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.465 [2024-11-20 07:23:54.139347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.139983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.139993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.466 [2024-11-20 07:23:54.140053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 
07:23:54.140236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140426] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.466 [2024-11-20 07:23:54.140445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.466 [2024-11-20 07:23:54.140454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270a7a0 is same with the state(6) to be set 00:23:19.466 [2024-11-20 07:23:54.142394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:19.466 [2024-11-20 07:23:54.142418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:19.466 [2024-11-20 07:23:54.142430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:19.466 [2024-11-20 07:23:54.142441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:19.466 [2024-11-20 07:23:54.142526] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:19.466 [2024-11-20 07:23:54.142540] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:19.466 [2024-11-20 07:23:54.142556] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
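Each READ/completion pair in the dump above is one in-flight command on I/O queue 1 being failed back with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, ABORTED - SQ DELETION: the submission queue is torn down while the cnode1-cnode10 controllers are being reset, so every queued command completes with that abort status before the reconnects start. A quick way to summarize such an abort storm from a saved copy of the console output (illustrative sketch only; build.log is a placeholder name, not a file this job writes):

  # Count aborted completions and list the affected LBA ranges.
  grep -c 'ABORTED - SQ DELETION' build.log
  grep -o 'lba:[0-9]* len:[0-9]*' build.log | sort -u | head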
00:23:19.466 [2024-11-20 07:23:54.159423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:19.466 [2024-11-20 07:23:54.159444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:19.466 task offset: 27264 on job bdev=Nvme5n1 fails
00:23:19.466
00:23:19.466 Latency(us)
00:23:19.466 [2024-11-20T06:23:54.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.466 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.466 Job: Nvme1n1 ended in about 1.00 seconds with error
00:23:19.466 Verification LBA range: start 0x0 length 0x400
00:23:19.466 Nvme1n1 : 1.00 192.58 12.04 64.19 0.00 246560.53 13871.79 263891.63
00:23:19.466 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.466 Job: Nvme2n1 ended in about 0.99 seconds with error
00:23:19.466 Verification LBA range: start 0x0 length 0x400
00:23:19.466 Nvme2n1 : 0.99 194.76 12.17 64.92 0.00 239046.40 16493.23 251658.24
00:23:19.466 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.466 Job: Nvme3n1 ended in about 1.00 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme3n1 : 1.00 192.10 12.01 64.03 0.00 237750.29 12615.68 246415.36
00:23:19.467 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme4n1 ended in about 1.00 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme4n1 : 1.00 191.63 11.98 63.88 0.00 233617.49 20643.84 241172.48
00:23:19.467 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme5n1 ended in about 0.98 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme5n1 : 0.98 195.31 12.21 65.10 0.00 224217.60 13981.01 230686.72
00:23:19.467 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme6n1 ended in about 0.98 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme6n1 : 0.98 195.08 12.19 65.03 0.00 219830.08 13598.72 241172.48
00:23:19.467 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme7n1 ended in about 1.00 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme7n1 : 1.00 191.16 11.95 63.72 0.00 220106.03 16930.13 221074.77
00:23:19.467 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme8n1 ended in about 1.01 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme8n1 : 1.01 190.69 11.92 63.56 0.00 216008.11 34297.17 230686.72
00:23:19.467 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme9n1 ended in about 1.01 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme9n1 : 1.01 126.82 7.93 63.41 0.00 282604.66 19988.48 270882.13
00:23:19.467 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:19.467 Job: Nvme10n1 ended in about 1.01 seconds with error
00:23:19.467 Verification LBA range: start 0x0 length 0x400
00:23:19.467 Nvme10n1 : 1.01 126.50 7.91 63.25 0.00 277278.15 17476.27 267386.88
00:23:19.467 [2024-11-20T06:23:54.234Z] ===================================================================================================================
00:23:19.467 [2024-11-20T06:23:54.234Z] Total : 1796.64 112.29 641.10 0.00 237584.07 12615.68 270882.13
00:23:19.467 [2024-11-20 07:23:54.186728] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:19.467 [2024-11-20 07:23:54.186777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:19.467 [2024-11-20 07:23:54.187245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.467 [2024-11-20 07:23:54.187267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246bb00 with addr=10.0.0.2, port=4420
00:23:19.467 [2024-11-20 07:23:54.187278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246bb00 is same with the state(6) to be set
00:23:19.467 [2024-11-20 07:23:54.187597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.467 [2024-11-20 07:23:54.187609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2481aa0 with addr=10.0.0.2, port=4420
00:23:19.467 [2024-11-20 07:23:54.187622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481aa0 is same with the state(6) to be set
00:23:19.467 [2024-11-20 07:23:54.187827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.467 [2024-11-20 07:23:54.187839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28ad540 with addr=10.0.0.2, port=4420
00:23:19.467 [2024-11-20 07:23:54.187846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ad540 is same with the state(6) to be set
00:23:19.467 [2024-11-20 07:23:54.188153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:19.467 [2024-11-20 07:23:54.188164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28db2a0 with addr=10.0.0.2, port=4420
00:23:19.467 [2024-11-20 07:23:54.188172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28db2a0 is same with the state(6) to be set
00:23:19.467 [2024-11-20 07:23:54.188199] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:23:19.467 [2024-11-20 07:23:54.188214] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:19.467 [2024-11-20 07:23:54.188225] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:19.467 [2024-11-20 07:23:54.188245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28db2a0 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.188261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ad540 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.188274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2481aa0 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.188286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246bb00 (9): Bad file descriptor 00:23:19.467 1796.64 IOPS, 112.29 MiB/s [2024-11-20T06:23:54.234Z] [2024-11-20 07:23:54.190173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:19.467 [2024-11-20 07:23:54.190188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:19.467 [2024-11-20 07:23:54.190560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.190576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28db480 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.190584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28db480 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.190747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.190758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28dfc40 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.190766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28dfc40 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.190982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.190993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28d0a40 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.191001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28d0a40 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.191027] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:19.467 [2024-11-20 07:23:54.191040] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:19.467 [2024-11-20 07:23:54.191054] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:19.467 [2024-11-20 07:23:54.191068] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:19.467 [2024-11-20 07:23:54.191080] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:23:19.467 [2024-11-20 07:23:54.191165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:19.467 [2024-11-20 07:23:54.191511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.191525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24783e0 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.191533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24783e0 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.191898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.191910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28ac0b0 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.191917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ac0b0 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.191927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28db480 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.191937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28dfc40 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.191946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28d0a40 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.191956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.191963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.191972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.191981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.191989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.191995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.192016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:19.467 [2024-11-20 07:23:54.192043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.192733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.467 [2024-11-20 07:23:54.192749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28b00a0 with addr=10.0.0.2, port=4420 00:23:19.467 [2024-11-20 07:23:54.192757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b00a0 is same with the state(6) to be set 00:23:19.467 [2024-11-20 07:23:54.192767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24783e0 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.192778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28ac0b0 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.192787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.192816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.192846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:19.467 [2024-11-20 07:23:54.192908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28b00a0 (9): Bad file descriptor 00:23:19.467 [2024-11-20 07:23:54.192918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.192925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.192933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.192939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.193097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.193107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.193114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.193120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:19.467 [2024-11-20 07:23:54.193152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:19.467 [2024-11-20 07:23:54.193161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:19.467 [2024-11-20 07:23:54.193168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:19.467 [2024-11-20 07:23:54.193178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
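Everything from the spdk_app_stop warning above down to the last "Resetting controller failed." is the expected tail of this test case: the target application has exited, so each bdev_nvme reconnect attempt dies in connect() with errno 111 (ECONNREFUSED) and all ten controllers end up in the failed state. The condition those posix.c errors report can be checked from the shell with a plain TCP probe (illustrative sketch only, not part of the test scripts):

  # Probe the NVMe/TCP listener; a refused connection here is the same errno 111.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo '10.0.0.2:4420 is accepting connections'
  else
      echo '10.0.0.2:4420 refused/unreachable (expected once the target is down)'
  fi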
00:23:19.728 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1355456 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1355456 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1355456 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.668 rmmod nvme_tcp 00:23:20.668 
rmmod nvme_fabrics 00:23:20.668 rmmod nvme_keyring 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1355072 ']' 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1355072 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1355072 ']' 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1355072 00:23:20.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1355072) - No such process 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1355072 is not found' 00:23:20.668 Process with pid 1355072 is not found 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.668 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.929 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.839 00:23:22.839 real 0m7.865s 00:23:22.839 user 0m19.424s 00:23:22.839 sys 0m1.275s 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.839 ************************************ 00:23:22.839 END TEST nvmf_shutdown_tc3 00:23:22.839 ************************************ 00:23:22.839 07:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.839 ************************************ 00:23:22.839 START TEST nvmf_shutdown_tc4 00:23:22.839 ************************************ 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.839 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:23.100 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:23.101 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:23.101 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.101 07:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:23.101 Found net devices under 0000:31:00.0: cvl_0_0 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:23.101 Found net devices under 0000:31:00.1: cvl_0_1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.101 07:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.101 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:23:23.363 00:23:23.363 --- 10.0.0.2 ping statistics --- 00:23:23.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.363 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:23.363 00:23:23.363 --- 10.0.0.1 ping statistics --- 00:23:23.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.363 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.363 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1356813 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1356813 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1356813 ']' 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
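The nvmf_tcp_init/nvmfappstart sequence traced above condenses to the commands below (a sketch assembled from the trace; address flushes and error handling are omitted, and the repeated "ip netns exec cvl_0_0_ns_spdk" prefixes on the nvmf_tgt invocation are effectively equivalent to a single prefix, since re-entering the same namespace changes nothing):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

  # Start the target inside the namespace, then block until its RPC socket exists
  # (a rough stand-in for the waitforlisten step in the trace):
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done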
00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:23.363 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.363 [2024-11-20 07:23:58.094506] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:23.363 [2024-11-20 07:23:58.094575] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.624 [2024-11-20 07:23:58.201225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.624 [2024-11-20 07:23:58.240956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.624 [2024-11-20 07:23:58.240999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.624 [2024-11-20 07:23:58.241005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.624 [2024-11-20 07:23:58.241011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.624 [2024-11-20 07:23:58.241016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.624 [2024-11-20 07:23:58.242482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.624 [2024-11-20 07:23:58.242653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.624 [2024-11-20 07:23:58.242814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.624 [2024-11-20 07:23:58.242816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.194 [2024-11-20 07:23:58.933416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:24.194 07:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.194 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.454 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.455 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.455 Malloc1 
00:23:24.455 [2024-11-20 07:23:59.041798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.455 Malloc2 00:23:24.455 Malloc3 00:23:24.455 Malloc4 00:23:24.455 Malloc5 00:23:24.455 Malloc6 00:23:24.715 Malloc7 00:23:24.715 Malloc8 00:23:24.715 Malloc9 00:23:24.715 Malloc10 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1357032 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:24.715 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:24.975 [2024-11-20 07:23:59.499021] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1356813 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1356813 ']' 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1356813 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1356813 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1356813' 00:23:30.265 killing process with pid 1356813 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1356813 00:23:30.265 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1356813 00:23:30.265 [2024-11-20 07:24:04.517726] 
00:23:30.265 [2024-11-20 07:24:04.517726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86f3c0 is same with the state(6) to be set
00:23:30.265 [the tqpair=0x86f3c0 message above repeats through 07:24:04.517820; throughout this section the target's error lines arrive interleaved with perf's per-command failures, and the long runs of those failures are elided below, keeping one exemplar of each]
00:23:30.265 Write completed with error (sct=0, sc=8)
00:23:30.265 starting I/O failed: -6
00:23:30.265 [2024-11-20 07:24:04.518120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86e550 is same with the state(6) to be set [repeats through 07:24:04.518192]
00:23:30.265 [2024-11-20 07:24:04.518552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.265 [2024-11-20 07:24:04.519434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.266 [2024-11-20 07:24:04.520088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x858200 is same with the state(6) to be set [repeats through 07:24:04.520147]
00:23:30.266 [2024-11-20 07:24:04.520319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8586f0 is same with the state(6) to be set [repeats through 07:24:04.520378]
00:23:30.266 [2024-11-20 07:24:04.520384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:30.266 [2024-11-20 07:24:04.520596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x858bc0 is same with the state(6) to be set [repeats through 07:24:04.520632]
00:23:30.266 [2024-11-20 07:24:04.520872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x857d30 is same with the state(6) to be set [repeats through 07:24:04.520921]
00:23:30.267 [2024-11-20 07:24:04.521785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.267 NVMe io qpair process completion error
00:23:30.267 [2024-11-20 07:24:04.522113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85add0 is same with the state(6) to be set [repeats through 07:24:04.522151]
00:23:30.267 [2024-11-20 07:24:04.522432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85a410 is same with the state(6) to be set [repeats through 07:24:04.522460]
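The block above is that fallout for the first controller, nqn.2016-06.io.spdk:cnode1, and the same pattern repeats below for cnode7, cnode3 and cnode9: with the target gone, each TCP qpair drops, in-flight commands complete with sct=0, sc=8 (Generic Command Status / Command Aborted due to SQ Deletion, per the NVMe spec's generic status codes), new submissions fail with -6 (ENXIO, the "No such device or address" in the CQ transport errors), and nvme_qpair.c logs one CQ transport error per qpair before the controller-level "NVMe io qpair process completion error". When triaging a run like this, summarizing the stream is easier than reading it raw; for example, against the saved console log:

    # how many writes were aborted, and which controller/qpair pairs dropped
    grep -c 'Write completed with error (sct=0, sc=8)' console.log
    grep -o '\[nqn[^]]*\] CQ transport error -6 .* on qpair id [0-9]*' console.log | sort | uniq -c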
00:23:30.267 [long interleaved runs of "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" elided between each of the error lines below]
00:23:30.267 [2024-11-20 07:24:04.523177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:30.267 [2024-11-20 07:24:04.523995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.268 [2024-11-20 07:24:04.525151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.268 [2024-11-20 07:24:04.526716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.268 NVMe io qpair process completion error
00:23:30.269 [2024-11-20 07:24:04.527876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.269 [2024-11-20 07:24:04.528807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.269 [2024-11-20 07:24:04.529726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.270 [2024-11-20 07:24:04.531179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:30.270 NVMe io qpair process completion error
00:23:30.270 [2024-11-20 07:24:04.532359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.271 [2024-11-20 07:24:04.533178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.271 [2024-11-20 07:24:04.534116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.271 [the aborted-write run for cnode9 continues past the end of this excerpt]
00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.271 starting I/O failed: -6 00:23:30.271 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 
00:23:30.272 [2024-11-20 07:24:04.537648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.272 NVMe io qpair process completion error 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 [2024-11-20 07:24:04.538893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting 
I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 [2024-11-20 07:24:04.539706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 
starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.272 starting I/O failed: -6 00:23:30.272 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 [2024-11-20 07:24:04.540637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 
00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 
00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 [2024-11-20 07:24:04.542293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.273 NVMe io qpair process completion error 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 
Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 [2024-11-20 07:24:04.543818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.273 starting I/O failed: -6 00:23:30.273 starting I/O failed: -6 00:23:30.273 starting I/O failed: -6 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.273 Write completed with error (sct=0, sc=8) 00:23:30.273 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 
starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 [2024-11-20 07:24:04.544821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 
00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 [2024-11-20 07:24:04.545731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 
00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.274 starting I/O failed: -6 00:23:30.274 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 00:23:30.275 [2024-11-20 07:24:04.548893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.275 NVMe io qpair process completion error 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 Write completed with error (sct=0, sc=8) 00:23:30.275 starting I/O failed: -6 
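The status pair repeated through these bursts decodes via SPDK's public completion types: sct=0 is SPDK_NVME_SCT_GENERIC and sc=8 is SPDK_NVME_SC_ABORTED_SQ_DELETION, meaning each write was aborted because its submission queue was deleted while the subsystem connection was torn down; the -6 in "starting I/O failed: -6" is -ENXIO, matching the "No such device or address" text in the ERROR lines. Below is a minimal sketch of a completion callback that classifies these statuses; the write_ctx struct, the callback name, and the retry policy are illustrative assumptions, not code from this test run.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical per-run bookkeeping for this sketch. */
struct write_ctx {
	uint64_t aborted;   /* completions with SC_ABORTED_SQ_DELETION */
	uint64_t failed;    /* all other error completions */
};

/* Matches spdk_nvme_cmd_cb: invoked by spdk_nvme_qpair_process_completions()
 * for each reaped completion, including the errored ones logged above. */
static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct write_ctx *ctx = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The (sct=0, sc=8) case in this log: the qpair's SQ was
		 * deleted under the I/O, so the write could be retried on a
		 * fresh qpair rather than treated as lost data. */
		ctx->aborted++;
	} else {
		ctx->failed++;
	}
	printf("Write completed with error (sct=%d, sc=%d): %s\n",
	       cpl->status.sct, cpl->status.sc,
	       spdk_nvme_cpl_get_status_string(&cpl->status));
}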
00:23:30.275 Write completed with error (sct=0, sc=8) [condensed burst of aborted writes and "starting I/O failed: -6" resubmission failures on nqn.2016-06.io.spdk:cnode4]
00:23:30.275 [2024-11-20 07:24:04.550156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.275 [2024-11-20 07:24:04.551038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.276 [2024-11-20 07:24:04.551967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.276 [2024-11-20 07:24:04.553422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:30.276 NVMe io qpair process completion error
00:23:30.277 Write completed with error (sct=0, sc=8) [condensed burst of aborted writes and "starting I/O failed: -6" resubmission failures on nqn.2016-06.io.spdk:cnode2]
00:23:30.277 [2024-11-20 07:24:04.554650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:30.277 [2024-11-20 07:24:04.555498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.277 Write completed with error
(sct=0, sc=8) 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 [2024-11-20 07:24:04.556453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.277 starting I/O failed: -6 00:23:30.277 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 
00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 [2024-11-20 07:24:04.558984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.278 NVMe io qpair process completion error 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, 
sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 [2024-11-20 07:24:04.560203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with 
error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 [2024-11-20 07:24:04.561009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.278 starting I/O failed: -6 00:23:30.278 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed 
with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 [2024-11-20 07:24:04.561941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 
00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 
00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 [2024-11-20 07:24:04.563791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.279 NVMe io qpair process completion error 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 starting I/O failed: -6 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.279 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error 
(sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 [2024-11-20 07:24:04.565012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:30.280 starting I/O failed: -6 00:23:30.280 starting I/O failed: -6 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 [2024-11-20 07:24:04.565995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with 
error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O failed: -6 00:23:30.280 Write completed with error (sct=0, sc=8) 00:23:30.280 starting I/O 
failed: -6 00:23:30.281 [2024-11-20 07:24:04.566927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 
00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 Write completed with error (sct=0, sc=8) 00:23:30.281 starting I/O failed: -6 00:23:30.281 [2024-11-20 07:24:04.570386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:30.281 NVMe io qpair process completion error 00:23:30.281 Initializing NVMe Controllers 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:30.281 Controller IO queue size 128, less than required. 
00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:30.281 Controller IO queue size 128, less than required. 00:23:30.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:30.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:30.281 Initialization complete. Launching workers. 
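Note on the queue-size warnings above: the initiator requested deeper IO queues than the 128 entries each target subsystem advertises, so excess submissions are queued inside the NVMe driver rather than on the wire. A minimal sketch of the remedy the log itself suggests, assuming the usual spdk_nvme_perf flags (-q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds, -r transport ID); the harness's actual invocation is not shown in this excerpt:

  # illustrative re-run with the queue depth capped below the 128-entry controller queue
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'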
00:23:30.281 ========================================================
00:23:30.281 Latency(us)
00:23:30.281 Device Information                                                      :     IOPS    MiB/s    Average        min        max
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1881.78    80.86   68039.25     620.13  131301.43
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1873.86    80.52   68348.15     625.52  126739.38
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1874.07    80.53   68374.25     838.79  123312.01
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1855.10    79.71   69100.75     628.74  137055.22
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1846.76    79.35   68695.91     619.91  125682.64
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1850.93    79.53   68565.83     647.60  121359.23
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1858.23    79.85   68317.63     688.63  125090.46
00:23:30.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1872.61    80.46   67812.46     694.77  127551.85
00:23:30.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1866.57    80.20   68080.11     669.96  125844.59
00:23:30.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1852.39    79.59   68630.38     680.52  128033.95
00:23:30.282 ========================================================
00:23:30.282 Total                                                                   : 18632.30   800.61   68394.92     619.91  137055.22
00:23:30.282
00:23:30.282 [2024-11-20 07:24:04.575425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf836c0 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83060 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf84050 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf839f0 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf85360 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf84380 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83390 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf849e0 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf85540 is same with the state(6) to be set
00:23:30.282 [2024-11-20 07:24:04.575704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf846b0 is same with the state(6) to be set
00:23:30.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:30.282 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
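Note on the latency table above: the Total row aggregates the ten per-subsystem rows column-wise; IOPS and MiB/s are sums, min and max are taken across rows, and Average appears to be the IOPS-weighted mean. A quick, illustrative shell check that the per-subsystem IOPS add up to the reported total:

  # sums the ten IOPS values from the table; prints 18632.30, matching the Total row
  echo '1881.78 1873.86 1874.07 1855.10 1846.76 1850.93 1858.23 1872.61 1866.57 1852.39' |
      awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f\n", s }'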
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1357032
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1357032
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1357032
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
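Note on the "NOT wait 1357032" trace above: the harness expects waiting on the killed perf process to fail, and NOT inverts the exit status so the test step passes only when the wrapped command errors out. A condensed sketch of that inversion pattern; the real helper in autotest_common.sh additionally distinguishes signal deaths via the (( es > 128 )) check traced above:

  # succeed only if the wrapped command fails (simplified NOT pattern)
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # non-zero exit from "$@" becomes success
  }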
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1356813 ']'
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1356813
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1356813 ']'
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1356813
00:23:31.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1356813) - No such process
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1356813 is not found'
00:23:31.223 Process with pid 1356813 is not found
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:31.223 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:33.768 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:33.768
00:23:33.768 real 0m10.333s
00:23:33.768 user 0m27.811s
00:23:33.768 sys 0m4.103s
00:23:33.768 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:33.768 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:33.768 ************************************
00:23:33.768 END TEST nvmf_shutdown_tc4
00:23:33.768 ************************************
00:23:33.768 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:33.768
00:23:33.768 real 0m44.440s
00:23:33.768 user 1m45.514s
00:23:33.768 sys 0m14.566s
00:23:33.768 07:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
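Note on the killprocess trace above: because the target application is already gone, the kill -0 liveness probe fails with "No such process" and the helper only logs "Process with pid 1356813 is not found" instead of signalling. A minimal sketch of that defensive pattern, simplified from the steps traced above:

  # signal a pid only if it is still alive (simplified killprocess pattern)
  killprocess() {
      local pid=$1
      if kill -0 "$pid" 2> /dev/null; then
          kill "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }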
common/autotest_common.sh@10 -- # set +x 00:23:33.768 ************************************ 00:23:33.768 END TEST nvmf_shutdown 00:23:33.768 ************************************ 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.768 ************************************ 00:23:33.768 START TEST nvmf_nsid 00:23:33.768 ************************************ 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:33.768 * Looking for test storage... 00:23:33.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.768 --rc genhtml_branch_coverage=1 00:23:33.768 --rc genhtml_function_coverage=1 00:23:33.768 --rc genhtml_legend=1 00:23:33.768 --rc geninfo_all_blocks=1 00:23:33.768 --rc geninfo_unexecuted_blocks=1 00:23:33.768 00:23:33.768 ' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.768 --rc genhtml_branch_coverage=1 00:23:33.768 --rc genhtml_function_coverage=1 00:23:33.768 --rc genhtml_legend=1 00:23:33.768 --rc geninfo_all_blocks=1 00:23:33.768 --rc geninfo_unexecuted_blocks=1 00:23:33.768 00:23:33.768 ' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.768 --rc genhtml_branch_coverage=1 00:23:33.768 --rc genhtml_function_coverage=1 00:23:33.768 --rc genhtml_legend=1 00:23:33.768 --rc geninfo_all_blocks=1 00:23:33.768 --rc geninfo_unexecuted_blocks=1 00:23:33.768 00:23:33.768 ' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.768 --rc genhtml_branch_coverage=1 00:23:33.768 --rc genhtml_function_coverage=1 00:23:33.768 --rc genhtml_legend=1 00:23:33.768 --rc geninfo_all_blocks=1 00:23:33.768 --rc geninfo_unexecuted_blocks=1 00:23:33.768 00:23:33.768 ' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.768 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.769 07:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.984 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:41.985 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:41.985 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
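The trace above shows how the harness picks its test NICs: gather_supported_nvmf_pci_devs fills whitelists of PCI vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox parts) and matches them against a cached PCI bus scan; this run matched two E810 ports (0x8086 - 0x159b, ice driver). The snippet below is a simplified standalone illustration of that classification, not the harness helper itself -- it walks sysfs directly and keeps only the two E810 IDs seen in this run:

    #!/usr/bin/env bash
    # Illustrative sketch: locate Intel E810 PCI functions (0x1592/0x159b)
    # and print the net device each one exposes, mirroring the
    # "Found 0000:31:00.0 (0x8086 - 0x159b)" lines in this log.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")        # e.g. 0x8086 (Intel)
        device=$(<"$dev/device")        # e.g. 0x159b (E810 port)
        [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        for net in "$dev"/net/*; do     # interface name registered by the driver
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done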
00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:41.985 Found net devices under 0000:31:00.0: cvl_0_0 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:41.985 Found net devices under 0000:31:00.1: cvl_0_1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.985 07:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:23:41.985 00:23:41.985 --- 10.0.0.2 ping statistics --- 00:23:41.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.985 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:23:41.985 00:23:41.985 --- 10.0.0.1 ping statistics --- 00:23:41.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.985 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1363566 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1363566 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1363566 ']' 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:41.985 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:42.246 [2024-11-20 07:24:16.792192] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
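The nvmf_tcp_init steps traced above are the harness's single-host wiring: one physical port (cvl_0_0) moves into a fresh network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so the NVMe/TCP traffic that follows crosses the real link rather than loopback. The ACCEPT rule is tagged with an SPDK_NVMF: comment so the later iptr teardown can strip exactly those rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. A minimal sketch of the same pattern, assuming hypothetical port names eth_tgt/eth_ini:

    ip netns add nvmf_tgt_ns                         # private target namespace
    ip link set eth_tgt netns nvmf_tgt_ns            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev eth_ini              # initiator side stays in root ns
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # Tag the firewall exception so teardown can remove it by comment:
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:example'
    ping -c 1 10.0.0.2                               # root ns -> target port
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1     # target ns -> initiator port

Any server launched under ip netns exec nvmf_tgt_ns ... then owns 10.0.0.2, which is why NVMF_APP is prefixed with the namespace exec command before nvmf_tgt starts below.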
00:23:42.246 [2024-11-20 07:24:16.792259] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.246 [2024-11-20 07:24:16.886976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.246 [2024-11-20 07:24:16.926623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.246 [2024-11-20 07:24:16.926659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.246 [2024-11-20 07:24:16.926667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.246 [2024-11-20 07:24:16.926674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.246 [2024-11-20 07:24:16.926680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.246 [2024-11-20 07:24:16.927288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1363755 00:23:43.187 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1ec75ba3-cc5b-4d44-a3db-b7a688cb8e52 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f24c5e83-e733-43f6-bcde-9d56340dfed2 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ff9ec7f3-2a1f-4b7c-91b4-0fa132b573b5 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.188 null0 00:23:43.188 null1 00:23:43.188 null2 00:23:43.188 [2024-11-20 07:24:17.687904] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:23:43.188 [2024-11-20 07:24:17.687955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363755 ] 00:23:43.188 [2024-11-20 07:24:17.689643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.188 [2024-11-20 07:24:17.713822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1363755 /var/tmp/tgt2.sock 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1363755 ']' 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:43.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
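With both targets up (nvmf_tgt listening on 10.0.0.2:4420 and the second spdk_tgt about to listen on 10.0.0.1:4421 with three null-bdev namespaces), the steps that follow connect to cnode2, wait for the block devices, and assert that each namespace's reported NGUID equals its creation UUID with the dashes stripped -- uuid2nguid is essentially tr -d - plus uppercasing. A condensed sketch of that check for one namespace, assuming the controller enumerates as nvme0 (device names illustrative; the real run also passes --hostnqn/--hostid to nvme connect):

    uuid=1ec75ba3-cc5b-4d44-a3db-b7a688cb8e52    # UUID assigned at namespace create
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2
    want=$(tr -d '-' <<< "$uuid")                # expected NGUID: UUID minus dashes
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    if [[ ${got^^} == "${want^^}" ]]; then       # compare case-insensitively
        echo "nsid 1: NGUID matches namespace UUID"
    fi
    nvme disconnect -d /dev/nvme0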
00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:43.188 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:43.188 [2024-11-20 07:24:17.783020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.188 [2024-11-20 07:24:17.819454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.448 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:43.448 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:43.448 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:43.708 [2024-11-20 07:24:18.300874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.708 [2024-11-20 07:24:18.317008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:43.708 nvme0n1 nvme0n2 00:23:43.708 nvme1n1 00:23:43.708 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:43.708 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:43.708 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:45.091 07:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:46.476 07:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1ec75ba3-cc5b-4d44-a3db-b7a688cb8e52 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1ec75ba3cc5b4d44a3dbb7a688cb8e52 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1EC75BA3CC5B4D44A3DBB7A688CB8E52 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1EC75BA3CC5B4D44A3DBB7A688CB8E52 == \1\E\C\7\5\B\A\3\C\C\5\B\4\D\4\4\A\3\D\B\B\7\A\6\8\8\C\B\8\E\5\2 ]] 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f24c5e83-e733-43f6-bcde-9d56340dfed2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f24c5e83e73343f6bcde9d56340dfed2 00:23:46.476 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F24C5E83E73343F6BCDE9D56340DFED2 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F24C5E83E73343F6BCDE9D56340DFED2 == \F\2\4\C\5\E\8\3\E\7\3\3\4\3\F\6\B\C\D\E\9\D\5\6\3\4\0\D\F\E\D\2 ]] 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:46.477 07:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ff9ec7f3-2a1f-4b7c-91b4-0fa132b573b5 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:46.477 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:46.477 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ff9ec7f32a1f4b7c91b40fa132b573b5 00:23:46.477 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FF9EC7F32A1F4B7C91B40FA132B573B5 00:23:46.477 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FF9EC7F32A1F4B7C91B40FA132B573B5 == \F\F\9\E\C\7\F\3\2\A\1\F\4\B\7\C\9\1\B\4\0\F\A\1\3\2\B\5\7\3\B\5 ]] 00:23:46.477 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1363755 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1363755 ']' 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1363755 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1363755 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1363755' 00:23:46.738 killing process with pid 1363755 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1363755 00:23:46.738 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1363755 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.999 rmmod nvme_tcp 00:23:46.999 rmmod nvme_fabrics 00:23:46.999 rmmod nvme_keyring 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1363566 ']' 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1363566 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1363566 ']' 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1363566 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1363566 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1363566' 00:23:46.999 killing process with pid 1363566 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1363566 00:23:46.999 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1363566 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.260 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.170 00:23:49.170 real 0m15.808s 00:23:49.170 user 
0m11.527s 00:23:49.170 sys 0m7.486s 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:49.170 ************************************ 00:23:49.170 END TEST nvmf_nsid 00:23:49.170 ************************************ 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:49.170 00:23:49.170 real 13m30.281s 00:23:49.170 user 27m36.570s 00:23:49.170 sys 4m8.238s 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:49.170 07:24:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:49.170 ************************************ 00:23:49.170 END TEST nvmf_target_extra 00:23:49.170 ************************************ 00:23:49.431 07:24:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:49.431 07:24:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:49.431 07:24:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:49.431 07:24:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.431 ************************************ 00:23:49.431 START TEST nvmf_host 00:23:49.431 ************************************ 00:23:49.431 07:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:49.431 * Looking for test storage... 00:23:49.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:49.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.431 --rc genhtml_branch_coverage=1 00:23:49.431 --rc genhtml_function_coverage=1 00:23:49.431 --rc genhtml_legend=1 00:23:49.431 --rc geninfo_all_blocks=1 00:23:49.431 --rc geninfo_unexecuted_blocks=1 00:23:49.431 00:23:49.431 ' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:49.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.431 --rc genhtml_branch_coverage=1 00:23:49.431 --rc genhtml_function_coverage=1 00:23:49.431 --rc genhtml_legend=1 00:23:49.431 --rc geninfo_all_blocks=1 00:23:49.431 --rc geninfo_unexecuted_blocks=1 00:23:49.431 00:23:49.431 ' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:49.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.431 --rc genhtml_branch_coverage=1 00:23:49.431 --rc genhtml_function_coverage=1 00:23:49.431 --rc genhtml_legend=1 00:23:49.431 --rc geninfo_all_blocks=1 00:23:49.431 --rc geninfo_unexecuted_blocks=1 00:23:49.431 00:23:49.431 ' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:49.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.431 --rc genhtml_branch_coverage=1 00:23:49.431 --rc genhtml_function_coverage=1 00:23:49.431 --rc genhtml_legend=1 00:23:49.431 --rc geninfo_all_blocks=1 00:23:49.431 --rc geninfo_unexecuted_blocks=1 00:23:49.431 00:23:49.431 ' 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.431 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.693 07:24:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.694 ************************************ 00:23:49.694 START TEST nvmf_multicontroller 00:23:49.694 ************************************ 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:49.694 * Looking for test storage... 
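The "line 33: [: : integer expression expected" stderr recorded above is a real (and here harmless) bash error in nvmf/common.sh: the traced test is '[' '' -eq 1 ']', and -eq aborts when one side is an empty string instead of an integer, so the branch is simply skipped. A defensive pattern that would avoid the noise, as a sketch (VAR is a placeholder; the log does not show which variable is empty at line 33):

  # '[ "" -eq 1 ]' errors out because -eq requires integers on both sides.
  # Defaulting the expansion keeps the test well-formed when VAR is unset or empty:
  if [ "${VAR:-0}" -eq 1 ]; then
    echo "feature enabled"
  fi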
00:23:49.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:49.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.694 --rc genhtml_branch_coverage=1 00:23:49.694 --rc genhtml_function_coverage=1 00:23:49.694 --rc genhtml_legend=1 00:23:49.694 --rc geninfo_all_blocks=1 00:23:49.694 --rc geninfo_unexecuted_blocks=1 00:23:49.694 00:23:49.694 ' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:49.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.694 --rc genhtml_branch_coverage=1 00:23:49.694 --rc genhtml_function_coverage=1 00:23:49.694 --rc genhtml_legend=1 00:23:49.694 --rc geninfo_all_blocks=1 00:23:49.694 --rc geninfo_unexecuted_blocks=1 00:23:49.694 00:23:49.694 ' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:49.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.694 --rc genhtml_branch_coverage=1 00:23:49.694 --rc genhtml_function_coverage=1 00:23:49.694 --rc genhtml_legend=1 00:23:49.694 --rc geninfo_all_blocks=1 00:23:49.694 --rc geninfo_unexecuted_blocks=1 00:23:49.694 00:23:49.694 ' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:49.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.694 --rc genhtml_branch_coverage=1 00:23:49.694 --rc genhtml_function_coverage=1 00:23:49.694 --rc genhtml_legend=1 00:23:49.694 --rc geninfo_all_blocks=1 00:23:49.694 --rc geninfo_unexecuted_blocks=1 00:23:49.694 00:23:49.694 ' 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:49.694 07:24:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.694 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.957 07:24:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.957 07:24:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.100 
07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:58.100 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:58.100 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.100 07:24:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:58.100 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:58.101 Found net devices under 0000:31:00.0: cvl_0_0 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:58.101 Found net devices under 0000:31:00.1: cvl_0_1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
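With both e810 ports discovered (cvl_0_0, cvl_0_1) and is_hw=yes, nvmf_tcp_init builds a point-to-point topology on a single host: the target NIC moves into a private network namespace while the initiator NIC stays in the default namespace, so NVMe/TCP traffic crosses a real link between the two ports. Condensed from the commands traced below, with the exact names and addresses from this run:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP (default ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open NVMe/TCP port
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check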
00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:58.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:23:58.101 00:23:58.101 --- 10.0.0.2 ping statistics --- 00:23:58.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.101 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:58.101 00:23:58.101 --- 10.0.0.1 ping statistics --- 00:23:58.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.101 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1369393 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1369393 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1369393 ']' 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:58.101 07:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.363 [2024-11-20 07:24:32.892297] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:23:58.363 [2024-11-20 07:24:32.892352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.363 [2024-11-20 07:24:32.970493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.363 [2024-11-20 07:24:33.016679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.363 [2024-11-20 07:24:33.016732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.363 [2024-11-20 07:24:33.016739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.363 [2024-11-20 07:24:33.016744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.363 [2024-11-20 07:24:33.016749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.363 [2024-11-20 07:24:33.018414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.363 [2024-11-20 07:24:33.018580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.363 [2024-11-20 07:24:33.018580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.363 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:58.363 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:58.363 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:58.363 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.363 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 [2024-11-20 07:24:33.176660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 Malloc0 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 [2024-11-20 07:24:33.245502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 [2024-11-20 07:24:33.257438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 Malloc1 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1369435 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1369435 /var/tmp/bdevperf.sock 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1369435 ']' 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:58.625 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
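What follows exercises bdev_nvme_attach_controller error handling over bdevperf's private RPC socket: the first attach creates NVMe0n1, and every later attempt to reuse the name NVMe0 with a different hostnqn, a different subsystem, or multipath disabled must fail with JSON-RPC code -114 rather than silently reconnect. The sequence, sketched with rpc.py standing in for the rpc_cmd wrapper seen in the trace:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1   # ok: creates NVMe0n1
  # each of these must return -114 ("A controller named NVMe0 already exists ..."):
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
      -q nqn.2021-09-7.io.spdk:00001                     # different hostnqn
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1   # different subnqn
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
      -x disable                                         # multipath disabled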
00:23:58.626 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:58.626 07:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.569 NVMe0n1 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.569 1 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.569 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.832 request: 00:23:59.832 { 00:23:59.832 "name": "NVMe0", 00:23:59.832 "trtype": "tcp", 00:23:59.832 "traddr": "10.0.0.2", 00:23:59.832 "adrfam": "ipv4", 00:23:59.832 "trsvcid": "4420", 00:23:59.832 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:59.832 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:59.832 "hostaddr": "10.0.0.1", 00:23:59.832 "prchk_reftag": false, 00:23:59.832 "prchk_guard": false, 00:23:59.832 "hdgst": false, 00:23:59.832 "ddgst": false, 00:23:59.832 "allow_unrecognized_csi": false, 00:23:59.832 "method": "bdev_nvme_attach_controller", 00:23:59.832 "req_id": 1 00:23:59.832 } 00:23:59.832 Got JSON-RPC error response 00:23:59.832 response: 00:23:59.832 { 00:23:59.832 "code": -114, 00:23:59.832 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:59.832 } 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.832 request: 00:23:59.832 { 00:23:59.832 "name": "NVMe0", 00:23:59.832 "trtype": "tcp", 00:23:59.832 "traddr": "10.0.0.2", 00:23:59.832 "adrfam": "ipv4", 00:23:59.832 "trsvcid": "4420", 00:23:59.832 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.832 "hostaddr": "10.0.0.1", 00:23:59.832 "prchk_reftag": false, 00:23:59.832 "prchk_guard": false, 00:23:59.832 "hdgst": false, 00:23:59.832 "ddgst": false, 00:23:59.832 "allow_unrecognized_csi": false, 00:23:59.832 "method": "bdev_nvme_attach_controller", 00:23:59.832 "req_id": 1 00:23:59.832 } 00:23:59.832 Got JSON-RPC error response 00:23:59.832 response: 00:23:59.832 { 00:23:59.832 "code": -114, 00:23:59.832 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:59.832 } 00:23:59.832 07:24:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.832 request: 00:23:59.832 { 00:23:59.832 "name": "NVMe0", 00:23:59.832 "trtype": "tcp", 00:23:59.832 "traddr": "10.0.0.2", 00:23:59.832 "adrfam": "ipv4", 00:23:59.832 "trsvcid": "4420", 00:23:59.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.832 "hostaddr": "10.0.0.1", 00:23:59.832 "prchk_reftag": false, 00:23:59.832 "prchk_guard": false, 00:23:59.832 "hdgst": false, 00:23:59.832 "ddgst": false, 00:23:59.832 "multipath": "disable", 00:23:59.832 "allow_unrecognized_csi": false, 00:23:59.832 "method": "bdev_nvme_attach_controller", 00:23:59.832 "req_id": 1 00:23:59.832 } 00:23:59.832 Got JSON-RPC error response 00:23:59.832 response: 00:23:59.832 { 00:23:59.832 "code": -114, 00:23:59.832 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:59.832 } 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.832 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.833 07:24:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.833 request: 00:23:59.833 { 00:23:59.833 "name": "NVMe0", 00:23:59.833 "trtype": "tcp", 00:23:59.833 "traddr": "10.0.0.2", 00:23:59.833 "adrfam": "ipv4", 00:23:59.833 "trsvcid": "4420", 00:23:59.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.833 "hostaddr": "10.0.0.1", 00:23:59.833 "prchk_reftag": false, 00:23:59.833 "prchk_guard": false, 00:23:59.833 "hdgst": false, 00:23:59.833 "ddgst": false, 00:23:59.833 "multipath": "failover", 00:23:59.833 "allow_unrecognized_csi": false, 00:23:59.833 "method": "bdev_nvme_attach_controller", 00:23:59.833 "req_id": 1 00:23:59.833 } 00:23:59.833 Got JSON-RPC error response 00:23:59.833 response: 00:23:59.833 { 00:23:59.833 "code": -114, 00:23:59.833 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:59.833 } 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.833 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 NVMe0n1 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
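Having verified that "-x failover" is also rejected while attaching NVMe0 on the second listener (port 4421) succeeds, the trace below detaches that extra path, adds a second controller NVMe1 on 4421, confirms bdev_nvme_get_controllers now reports two controllers, and only then starts I/O through bdevperf's perform_tests helper. The same steps, condensed:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # drop the 4421 path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  [ "$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]
  # kick off the queued bdevperf job (-q 128 -o 4096 -w write -t 1):
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests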
00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:00.093 07:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.479 { 00:24:01.479 "results": [ 00:24:01.479 { 00:24:01.479 "job": "NVMe0n1", 00:24:01.479 "core_mask": "0x1", 00:24:01.479 "workload": "write", 00:24:01.479 "status": "finished", 00:24:01.479 "queue_depth": 128, 00:24:01.479 "io_size": 4096, 00:24:01.479 "runtime": 1.008871, 00:24:01.479 "iops": 19964.891447965103, 00:24:01.479 "mibps": 77.98785721861368, 00:24:01.479 "io_failed": 0, 00:24:01.479 "io_timeout": 0, 00:24:01.479 "avg_latency_us": 6401.145234170721, 00:24:01.479 "min_latency_us": 4041.3866666666668, 00:24:01.479 "max_latency_us": 13871.786666666667 00:24:01.479 } 00:24:01.479 ], 00:24:01.479 "core_count": 1 00:24:01.479 } 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1369435 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 1369435 ']' 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1369435 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.479 07:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1369435 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1369435' 00:24:01.479 killing process with pid 1369435 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1369435 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1369435 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:24:01.479 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:24:01.479 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:01.479 [2024-11-20 07:24:33.375307] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:24:01.479 [2024-11-20 07:24:33.375363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369435 ] 00:24:01.479 [2024-11-20 07:24:33.454542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.479 [2024-11-20 07:24:33.490734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.479 [2024-11-20 07:24:34.766795] bdev.c:4753:bdev_name_add: *ERROR*: Bdev name f967db3c-1be4-47be-b5fa-20b71a75b803 already exists 00:24:01.479 [2024-11-20 07:24:34.766826] bdev.c:7962:bdev_register: *ERROR*: Unable to add uuid:f967db3c-1be4-47be-b5fa-20b71a75b803 alias for bdev NVMe1n1 00:24:01.479 [2024-11-20 07:24:34.766835] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:01.479 Running I/O for 1 seconds... 00:24:01.479 19949.00 IOPS, 77.93 MiB/s 00:24:01.479 Latency(us) 00:24:01.479 [2024-11-20T06:24:36.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.479 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:01.479 NVMe0n1 : 1.01 19964.89 77.99 0.00 0.00 6401.15 4041.39 13871.79 00:24:01.480 [2024-11-20T06:24:36.247Z] =================================================================================================================== 00:24:01.480 [2024-11-20T06:24:36.247Z] Total : 19964.89 77.99 0.00 0.00 6401.15 4041.39 13871.79 00:24:01.480 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.480 00:24:01.480 Latency(us) 00:24:01.480 [2024-11-20T06:24:36.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.480 [2024-11-20T06:24:36.247Z] =================================================================================================================== 00:24:01.480 [2024-11-20T06:24:36.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.480 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.480 rmmod nvme_tcp 00:24:01.480 rmmod nvme_fabrics 00:24:01.480 rmmod nvme_keyring 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:01.480 
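In the perform_tests result further up, the "mibps" field is derived directly from "iops" and "io_size": at 4096-byte writes, 19964.891447965103 IOPS works out to 19964.891447965103 * 4096 / 1048576 ≈ 77.99 MiB/s, which matches both the JSON ("mibps": 77.98785721861368) and the human-readable table captured in try.txt (19964.89 IOPS, 77.99 MiB/s). A quick arithmetic check of that relationship (a sketch, assuming only a POSIX awk):

awk 'BEGIN {
  iops = 19964.891447965103   # "iops" from the perform_tests JSON above
  io_size = 4096              # "io_size" in bytes
  # MiB/s = IOPS * bytes per IO / bytes per MiB; with 4096-byte IOs this is iops/256
  printf "%.11f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints 77.98785721861 MiB/s, i.e. the reported "mibps"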
07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1369393 ']' 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1369393 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1369393 ']' 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1369393 00:24:01.480 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1369393 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1369393' 00:24:01.742 killing process with pid 1369393 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1369393 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1369393 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.742 07:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.288 00:24:04.288 real 0m14.291s 00:24:04.288 user 0m15.770s 00:24:04.288 sys 0m6.958s 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.288 ************************************ 00:24:04.288 END TEST nvmf_multicontroller 00:24:04.288 ************************************ 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.288 ************************************ 00:24:04.288 START TEST nvmf_aer 00:24:04.288 ************************************ 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:04.288 * Looking for test storage... 00:24:04.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.288 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:04.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.289 --rc genhtml_branch_coverage=1 00:24:04.289 --rc genhtml_function_coverage=1 00:24:04.289 --rc genhtml_legend=1 00:24:04.289 --rc geninfo_all_blocks=1 00:24:04.289 --rc geninfo_unexecuted_blocks=1 00:24:04.289 00:24:04.289 ' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:04.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.289 --rc genhtml_branch_coverage=1 00:24:04.289 --rc genhtml_function_coverage=1 00:24:04.289 --rc genhtml_legend=1 00:24:04.289 --rc geninfo_all_blocks=1 00:24:04.289 --rc geninfo_unexecuted_blocks=1 00:24:04.289 00:24:04.289 ' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:04.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.289 --rc genhtml_branch_coverage=1 00:24:04.289 --rc genhtml_function_coverage=1 00:24:04.289 --rc genhtml_legend=1 00:24:04.289 --rc geninfo_all_blocks=1 00:24:04.289 --rc geninfo_unexecuted_blocks=1 00:24:04.289 00:24:04.289 ' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:04.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.289 --rc genhtml_branch_coverage=1 00:24:04.289 --rc genhtml_function_coverage=1 00:24:04.289 --rc genhtml_legend=1 00:24:04.289 --rc geninfo_all_blocks=1 00:24:04.289 --rc geninfo_unexecuted_blocks=1 00:24:04.289 00:24:04.289 ' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.289 07:24:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.437 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:12.438 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:12.438 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:12.438 Found net devices under 0000:31:00.0: cvl_0_0 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.438 07:24:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:12.438 Found net devices under 0000:31:00.1: cvl_0_1 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.438 07:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.438 
07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:24:12.438 00:24:12.438 --- 10.0.0.2 ping statistics --- 00:24:12.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.438 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:24:12.438 00:24:12.438 --- 10.0.0.1 ping statistics --- 00:24:12.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.438 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.438 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1374785 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1374785 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1374785 ']' 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:12.699 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 [2024-11-20 07:24:47.279801] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:24:12.699 [2024-11-20 07:24:47.279854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.699 [2024-11-20 07:24:47.369092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.699 [2024-11-20 07:24:47.405343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.699 [2024-11-20 07:24:47.405377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.700 [2024-11-20 07:24:47.405386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.700 [2024-11-20 07:24:47.405392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.700 [2024-11-20 07:24:47.405399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.700 [2024-11-20 07:24:47.407103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.700 [2024-11-20 07:24:47.407190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.700 [2024-11-20 07:24:47.407345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.700 [2024-11-20 07:24:47.407345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 [2024-11-20 07:24:47.548508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 Malloc0 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 [2024-11-20 07:24:47.623239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 [ 00:24:12.960 { 00:24:12.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:12.960 "subtype": "Discovery", 00:24:12.960 "listen_addresses": [], 00:24:12.960 "allow_any_host": true, 00:24:12.960 "hosts": [] 00:24:12.960 }, 00:24:12.960 { 00:24:12.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.960 "subtype": "NVMe", 00:24:12.960 "listen_addresses": [ 00:24:12.960 { 00:24:12.960 "trtype": "TCP", 00:24:12.960 "adrfam": "IPv4", 00:24:12.960 "traddr": "10.0.0.2", 00:24:12.960 "trsvcid": "4420" 00:24:12.960 } 00:24:12.960 ], 00:24:12.960 "allow_any_host": true, 00:24:12.960 "hosts": [], 00:24:12.960 "serial_number": "SPDK00000000000001", 00:24:12.960 "model_number": "SPDK bdev Controller", 00:24:12.960 "max_namespaces": 2, 00:24:12.960 "min_cntlid": 1, 00:24:12.960 "max_cntlid": 65519, 00:24:12.960 "namespaces": [ 00:24:12.960 { 00:24:12.960 "nsid": 1, 00:24:12.960 "bdev_name": "Malloc0", 00:24:12.960 "name": "Malloc0", 00:24:12.960 "nguid": "2DCD8C8C6C2844EFA4528A7A160EDF39", 00:24:12.960 "uuid": "2dcd8c8c-6c28-44ef-a452-8a7a160edf39" 00:24:12.960 } 00:24:12.960 ] 00:24:12.960 } 00:24:12.960 ] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1374820 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:24:12.960 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.221 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 Malloc1 00:24:13.482 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.482 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:13.482 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.482 07:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 Asynchronous Event Request test 00:24:13.482 Attaching to 10.0.0.2 00:24:13.482 Attached to 10.0.0.2 00:24:13.482 Registering asynchronous event callbacks... 00:24:13.482 Starting namespace attribute notice tests for all controllers... 00:24:13.482 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:13.482 aer_cb - Changed Namespace 00:24:13.482 Cleaning up... 
00:24:13.482 [ 00:24:13.482 { 00:24:13.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.482 "subtype": "Discovery", 00:24:13.482 "listen_addresses": [], 00:24:13.482 "allow_any_host": true, 00:24:13.482 "hosts": [] 00:24:13.482 }, 00:24:13.482 { 00:24:13.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.482 "subtype": "NVMe", 00:24:13.482 "listen_addresses": [ 00:24:13.482 { 00:24:13.482 "trtype": "TCP", 00:24:13.482 "adrfam": "IPv4", 00:24:13.482 "traddr": "10.0.0.2", 00:24:13.482 "trsvcid": "4420" 00:24:13.482 } 00:24:13.482 ], 00:24:13.482 "allow_any_host": true, 00:24:13.482 "hosts": [], 00:24:13.482 "serial_number": "SPDK00000000000001", 00:24:13.482 "model_number": "SPDK bdev Controller", 00:24:13.482 "max_namespaces": 2, 00:24:13.482 "min_cntlid": 1, 00:24:13.482 "max_cntlid": 65519, 00:24:13.482 "namespaces": [ 00:24:13.482 { 00:24:13.482 "nsid": 1, 00:24:13.482 "bdev_name": "Malloc0", 00:24:13.482 "name": "Malloc0", 00:24:13.482 "nguid": "2DCD8C8C6C2844EFA4528A7A160EDF39", 00:24:13.482 "uuid": "2dcd8c8c-6c28-44ef-a452-8a7a160edf39" 00:24:13.482 }, 00:24:13.482 { 00:24:13.482 "nsid": 2, 00:24:13.482 "bdev_name": "Malloc1", 00:24:13.482 "name": "Malloc1", 00:24:13.482 "nguid": "79B6C2B0CDC04F44B55707D22064F85B", 00:24:13.482 "uuid": "79b6c2b0-cdc0-4f44-b557-07d22064f85b" 00:24:13.482 } 00:24:13.482 ] 00:24:13.482 } 00:24:13.482 ] 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1374820 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.482 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.483 rmmod 
nvme_tcp 00:24:13.483 rmmod nvme_fabrics 00:24:13.483 rmmod nvme_keyring 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1374785 ']' 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1374785 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1374785 ']' 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1374785 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1374785 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1374785' 00:24:13.483 killing process with pid 1374785 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1374785 00:24:13.483 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1374785 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.744 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.745 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.745 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.745 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.745 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.745 07:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.290 00:24:16.290 real 0m11.812s 00:24:16.290 user 0m6.383s 00:24:16.290 sys 0m6.531s 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.290 ************************************ 00:24:16.290 END TEST nvmf_aer 00:24:16.290 ************************************ 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.290 ************************************ 00:24:16.290 START TEST nvmf_async_init 00:24:16.290 ************************************ 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:16.290 * Looking for test storage... 00:24:16.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.290 --rc genhtml_branch_coverage=1 00:24:16.290 --rc genhtml_function_coverage=1 00:24:16.290 --rc genhtml_legend=1 00:24:16.290 --rc geninfo_all_blocks=1 00:24:16.290 --rc geninfo_unexecuted_blocks=1 00:24:16.290 00:24:16.290 ' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.290 --rc genhtml_branch_coverage=1 00:24:16.290 --rc genhtml_function_coverage=1 00:24:16.290 --rc genhtml_legend=1 00:24:16.290 --rc geninfo_all_blocks=1 00:24:16.290 --rc geninfo_unexecuted_blocks=1 00:24:16.290 00:24:16.290 ' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.290 --rc genhtml_branch_coverage=1 00:24:16.290 --rc genhtml_function_coverage=1 00:24:16.290 --rc genhtml_legend=1 00:24:16.290 --rc geninfo_all_blocks=1 00:24:16.290 --rc geninfo_unexecuted_blocks=1 00:24:16.290 00:24:16.290 ' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:16.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.290 --rc genhtml_branch_coverage=1 00:24:16.290 --rc genhtml_function_coverage=1 00:24:16.290 --rc genhtml_legend=1 00:24:16.290 --rc geninfo_all_blocks=1 00:24:16.290 --rc geninfo_unexecuted_blocks=1 00:24:16.290 00:24:16.290 ' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.290 07:24:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.290 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:16.291 07:24:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=044fcf2c7aed4fa0987b71d97c0abcc6 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.291 07:24:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.438 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:24.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:24.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:24.439 Found net devices under 0000:31:00.0: cvl_0_0 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:24.439 Found net devices under 0000:31:00.1: cvl_0_1 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.439 07:24:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.439 07:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.439 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:24:24.700 00:24:24.700 --- 10.0.0.2 ping statistics --- 00:24:24.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.700 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:24:24.700 00:24:24.700 --- 10.0.0.1 ping statistics --- 00:24:24.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.700 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1379808 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1379808 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1379808 ']' 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:24.700 07:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.700 [2024-11-20 07:24:59.322936] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
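Note on the trace above: nvmf_tcp_init runs both ends of the fabric on one machine by moving the target-side e810 port (cvl_0_0) into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so the NVMe/TCP traffic really crosses the wire; the two pings are the sanity check. A minimal standalone sketch of the same setup, assuming this rig's interface names and the 10.0.0.0/24 addressing used here:

    # Target port gets its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on 4420; the SPDK_NVMF comment lets teardown strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Reachability in both directions, mirroring the pings above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1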
00:24:24.700 [2024-11-20 07:24:59.323004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.700 [2024-11-20 07:24:59.413467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.700 [2024-11-20 07:24:59.453755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.700 [2024-11-20 07:24:59.453790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.700 [2024-11-20 07:24:59.453798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.700 [2024-11-20 07:24:59.453805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.700 [2024-11-20 07:24:59.453811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.700 [2024-11-20 07:24:59.454398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 [2024-11-20 07:25:00.165601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 null0 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 044fcf2c7aed4fa0987b71d97c0abcc6 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.643 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.644 [2024-11-20 07:25:00.225907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.644 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.644 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:25.644 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.644 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.904 nvme0n1 00:24:25.904 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 [ 00:24:25.905 { 00:24:25.905 "name": "nvme0n1", 00:24:25.905 "aliases": [ 00:24:25.905 "044fcf2c-7aed-4fa0-987b-71d97c0abcc6" 00:24:25.905 ], 00:24:25.905 "product_name": "NVMe disk", 00:24:25.905 "block_size": 512, 00:24:25.905 "num_blocks": 2097152, 00:24:25.905 "uuid": "044fcf2c-7aed-4fa0-987b-71d97c0abcc6", 00:24:25.905 "numa_id": 0, 00:24:25.905 "assigned_rate_limits": { 00:24:25.905 "rw_ios_per_sec": 0, 00:24:25.905 "rw_mbytes_per_sec": 0, 00:24:25.905 "r_mbytes_per_sec": 0, 00:24:25.905 "w_mbytes_per_sec": 0 00:24:25.905 }, 00:24:25.905 "claimed": false, 00:24:25.905 "zoned": false, 00:24:25.905 "supported_io_types": { 00:24:25.905 "read": true, 00:24:25.905 "write": true, 00:24:25.905 "unmap": false, 00:24:25.905 "flush": true, 00:24:25.905 "reset": true, 00:24:25.905 "nvme_admin": true, 00:24:25.905 "nvme_io": true, 00:24:25.905 "nvme_io_md": false, 00:24:25.905 "write_zeroes": true, 00:24:25.905 "zcopy": false, 00:24:25.905 "get_zone_info": false, 00:24:25.905 "zone_management": false, 00:24:25.905 "zone_append": false, 00:24:25.905 "compare": true, 00:24:25.905 "compare_and_write": true, 00:24:25.905 "abort": true, 00:24:25.905 "seek_hole": false, 00:24:25.905 "seek_data": false, 00:24:25.905 "copy": true, 00:24:25.905 "nvme_iov_md": false 00:24:25.905 }, 00:24:25.905 
"memory_domains": [ 00:24:25.905 { 00:24:25.905 "dma_device_id": "system", 00:24:25.905 "dma_device_type": 1 00:24:25.905 } 00:24:25.905 ], 00:24:25.905 "driver_specific": { 00:24:25.905 "nvme": [ 00:24:25.905 { 00:24:25.905 "trid": { 00:24:25.905 "trtype": "TCP", 00:24:25.905 "adrfam": "IPv4", 00:24:25.905 "traddr": "10.0.0.2", 00:24:25.905 "trsvcid": "4420", 00:24:25.905 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.905 }, 00:24:25.905 "ctrlr_data": { 00:24:25.905 "cntlid": 1, 00:24:25.905 "vendor_id": "0x8086", 00:24:25.905 "model_number": "SPDK bdev Controller", 00:24:25.905 "serial_number": "00000000000000000000", 00:24:25.905 "firmware_revision": "25.01", 00:24:25.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.905 "oacs": { 00:24:25.905 "security": 0, 00:24:25.905 "format": 0, 00:24:25.905 "firmware": 0, 00:24:25.905 "ns_manage": 0 00:24:25.905 }, 00:24:25.905 "multi_ctrlr": true, 00:24:25.905 "ana_reporting": false 00:24:25.905 }, 00:24:25.905 "vs": { 00:24:25.905 "nvme_version": "1.3" 00:24:25.905 }, 00:24:25.905 "ns_data": { 00:24:25.905 "id": 1, 00:24:25.905 "can_share": true 00:24:25.905 } 00:24:25.905 } 00:24:25.905 ], 00:24:25.905 "mp_policy": "active_passive" 00:24:25.905 } 00:24:25.905 } 00:24:25.905 ] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 [2024-11-20 07:25:00.503174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:25.905 [2024-11-20 07:25:00.503240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9a050 (9): Bad file descriptor 00:24:25.905 [2024-11-20 07:25:00.634958] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.905 [ 00:24:25.905 { 00:24:25.905 "name": "nvme0n1", 00:24:25.905 "aliases": [ 00:24:25.905 "044fcf2c-7aed-4fa0-987b-71d97c0abcc6" 00:24:25.905 ], 00:24:25.905 "product_name": "NVMe disk", 00:24:25.905 "block_size": 512, 00:24:25.905 "num_blocks": 2097152, 00:24:25.905 "uuid": "044fcf2c-7aed-4fa0-987b-71d97c0abcc6", 00:24:25.905 "numa_id": 0, 00:24:25.905 "assigned_rate_limits": { 00:24:25.905 "rw_ios_per_sec": 0, 00:24:25.905 "rw_mbytes_per_sec": 0, 00:24:25.905 "r_mbytes_per_sec": 0, 00:24:25.905 "w_mbytes_per_sec": 0 00:24:25.905 }, 00:24:25.905 "claimed": false, 00:24:25.905 "zoned": false, 00:24:25.905 "supported_io_types": { 00:24:25.905 "read": true, 00:24:25.905 "write": true, 00:24:25.905 "unmap": false, 00:24:25.905 "flush": true, 00:24:25.905 "reset": true, 00:24:25.905 "nvme_admin": true, 00:24:25.905 "nvme_io": true, 00:24:25.905 "nvme_io_md": false, 00:24:25.905 "write_zeroes": true, 00:24:25.905 "zcopy": false, 00:24:25.905 "get_zone_info": false, 00:24:25.905 "zone_management": false, 00:24:25.905 "zone_append": false, 00:24:25.905 "compare": true, 00:24:25.905 "compare_and_write": true, 00:24:25.905 "abort": true, 00:24:25.905 "seek_hole": false, 00:24:25.905 "seek_data": false, 00:24:25.905 "copy": true, 00:24:25.905 "nvme_iov_md": false 00:24:25.905 }, 00:24:25.905 "memory_domains": [ 00:24:25.905 { 00:24:25.905 "dma_device_id": "system", 00:24:25.905 "dma_device_type": 1 00:24:25.905 } 00:24:25.905 ], 00:24:25.905 "driver_specific": { 00:24:25.905 "nvme": [ 00:24:25.905 { 00:24:25.905 "trid": { 00:24:25.905 "trtype": "TCP", 00:24:25.905 "adrfam": "IPv4", 00:24:25.905 "traddr": "10.0.0.2", 00:24:25.905 "trsvcid": "4420", 00:24:25.905 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.905 }, 00:24:25.905 "ctrlr_data": { 00:24:25.905 "cntlid": 2, 00:24:25.905 "vendor_id": "0x8086", 00:24:25.905 "model_number": "SPDK bdev Controller", 00:24:25.905 "serial_number": "00000000000000000000", 00:24:25.905 "firmware_revision": "25.01", 00:24:25.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.905 "oacs": { 00:24:25.905 "security": 0, 00:24:25.905 "format": 0, 00:24:25.905 "firmware": 0, 00:24:25.905 "ns_manage": 0 00:24:25.905 }, 00:24:25.905 "multi_ctrlr": true, 00:24:25.905 "ana_reporting": false 00:24:25.905 }, 00:24:25.905 "vs": { 00:24:25.905 "nvme_version": "1.3" 00:24:25.905 }, 00:24:25.905 "ns_data": { 00:24:25.905 "id": 1, 00:24:25.905 "can_share": true 00:24:25.905 } 00:24:25.905 } 00:24:25.905 ], 00:24:25.905 "mp_policy": "active_passive" 00:24:25.905 } 00:24:25.905 } 00:24:25.905 ] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.905 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
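Note: the two bdev_get_bdevs dumps bracket the controller reset, which is the point of this step. bdev_nvme_reset_controller nvme0 drops and re-establishes the connection, and across the dumps cntlid advances from 1 to 2 while the namespace identity (uuid 044fcf2c-7aed-4fa0-987b-71d97c0abcc6) is unchanged. Checking the same invariant by hand (the jq extraction is illustrative, not part of the test):

    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    # The controller ID should have moved on; the namespace UUID should not.
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid, .[0].uuid'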
00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hLu5a2sQgS 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hLu5a2sQgS 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.hLu5a2sQgS 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 [2024-11-20 07:25:00.723871] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.167 [2024-11-20 07:25:00.723988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 [2024-11-20 07:25:00.747950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.167 nvme0n1 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 [ 00:24:26.167 { 00:24:26.167 "name": "nvme0n1", 00:24:26.167 "aliases": [ 00:24:26.167 "044fcf2c-7aed-4fa0-987b-71d97c0abcc6" 00:24:26.167 ], 00:24:26.167 "product_name": "NVMe disk", 00:24:26.167 "block_size": 512, 00:24:26.167 "num_blocks": 2097152, 00:24:26.167 "uuid": "044fcf2c-7aed-4fa0-987b-71d97c0abcc6", 00:24:26.167 "numa_id": 0, 00:24:26.167 "assigned_rate_limits": { 00:24:26.167 "rw_ios_per_sec": 0, 00:24:26.167 "rw_mbytes_per_sec": 0, 00:24:26.167 "r_mbytes_per_sec": 0, 00:24:26.167 "w_mbytes_per_sec": 0 00:24:26.167 }, 00:24:26.167 "claimed": false, 00:24:26.167 "zoned": false, 00:24:26.167 "supported_io_types": { 00:24:26.167 "read": true, 00:24:26.167 "write": true, 00:24:26.167 "unmap": false, 00:24:26.167 "flush": true, 00:24:26.167 "reset": true, 00:24:26.167 "nvme_admin": true, 00:24:26.167 "nvme_io": true, 00:24:26.167 "nvme_io_md": false, 00:24:26.167 "write_zeroes": true, 00:24:26.167 "zcopy": false, 00:24:26.167 "get_zone_info": false, 00:24:26.167 "zone_management": false, 00:24:26.167 "zone_append": false, 00:24:26.167 "compare": true, 00:24:26.167 "compare_and_write": true, 00:24:26.167 "abort": true, 00:24:26.167 "seek_hole": false, 00:24:26.167 "seek_data": false, 00:24:26.167 "copy": true, 00:24:26.167 "nvme_iov_md": false 00:24:26.167 }, 00:24:26.167 "memory_domains": [ 00:24:26.167 { 00:24:26.167 "dma_device_id": "system", 00:24:26.167 "dma_device_type": 1 00:24:26.167 } 00:24:26.167 ], 00:24:26.167 "driver_specific": { 00:24:26.167 "nvme": [ 00:24:26.167 { 00:24:26.167 "trid": { 00:24:26.167 "trtype": "TCP", 00:24:26.167 "adrfam": "IPv4", 00:24:26.167 "traddr": "10.0.0.2", 00:24:26.167 "trsvcid": "4421", 00:24:26.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:26.167 }, 00:24:26.167 "ctrlr_data": { 00:24:26.167 "cntlid": 3, 00:24:26.167 "vendor_id": "0x8086", 00:24:26.167 "model_number": "SPDK bdev Controller", 00:24:26.167 "serial_number": "00000000000000000000", 00:24:26.167 "firmware_revision": "25.01", 00:24:26.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.167 "oacs": { 00:24:26.167 "security": 0, 00:24:26.167 "format": 0, 00:24:26.167 "firmware": 0, 00:24:26.167 "ns_manage": 0 00:24:26.167 }, 00:24:26.167 "multi_ctrlr": true, 00:24:26.167 "ana_reporting": false 00:24:26.167 }, 00:24:26.167 "vs": { 00:24:26.167 "nvme_version": "1.3" 00:24:26.167 }, 00:24:26.167 "ns_data": { 00:24:26.167 "id": 1, 00:24:26.167 "can_share": true 00:24:26.167 } 00:24:26.167 } 00:24:26.167 ], 00:24:26.167 "mp_policy": "active_passive" 00:24:26.167 } 00:24:26.167 } 00:24:26.167 ] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.hLu5a2sQgS 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
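Note: the second attach above repeats the exercise over TLS. Four pieces cooperate: a PSK in NVMe TLS key interchange format staged through SPDK's keyring, allow_any_host switched off, a second listener on port 4421 flagged --secure-channel, and the key named both on the allowed-host entry and on the initiator-side attach. Condensed below; the key material is the test's throwaway example key, and the filename stands in for the mktemp result:

    # Stage the PSK with key-file permissions.
    KEY=/tmp/psk.txt
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
    # Close the subsystem, open a TLS listener, and allow one host NQN with the key.
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator presents the same key (and the matching host NQN) on 4421.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host1 --psk key0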
00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.167 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:26.168 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.168 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:26.168 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.168 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.168 rmmod nvme_tcp 00:24:26.168 rmmod nvme_fabrics 00:24:26.168 rmmod nvme_keyring 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1379808 ']' 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1379808 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1379808 ']' 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1379808 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:26.428 07:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1379808 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1379808' 00:24:26.428 killing process with pid 1379808 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1379808 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1379808 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
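Note: nvmftestfini, which begins above and finishes just below with the namespace removal and address flush, unwinds the whole setup. A rough by-hand equivalent under the same names used earlier in this trace (remove_spdk_ns is the harness helper; ip netns delete is assumed to be what it boils down to here):

    modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"               # target pid recorded at startup (1379808 in this run)
    # Strip only the rules tagged SPDK_NVMF, then dismantle the namespace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1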
00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.428 07:25:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.978 00:24:28.978 real 0m12.707s 00:24:28.978 user 0m4.509s 00:24:28.978 sys 0m6.717s 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.978 ************************************ 00:24:28.978 END TEST nvmf_async_init 00:24:28.978 ************************************ 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.978 ************************************ 00:24:28.978 START TEST dma 00:24:28.978 ************************************ 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.978 * Looking for test storage... 00:24:28.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:28.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.978 --rc genhtml_branch_coverage=1 00:24:28.978 --rc genhtml_function_coverage=1 00:24:28.978 --rc genhtml_legend=1 00:24:28.978 --rc geninfo_all_blocks=1 00:24:28.978 --rc geninfo_unexecuted_blocks=1 00:24:28.978 00:24:28.978 ' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:28.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.978 --rc genhtml_branch_coverage=1 00:24:28.978 --rc genhtml_function_coverage=1 00:24:28.978 --rc genhtml_legend=1 00:24:28.978 --rc geninfo_all_blocks=1 00:24:28.978 --rc geninfo_unexecuted_blocks=1 00:24:28.978 00:24:28.978 ' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:28.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.978 --rc genhtml_branch_coverage=1 00:24:28.978 --rc genhtml_function_coverage=1 00:24:28.978 --rc genhtml_legend=1 00:24:28.978 --rc geninfo_all_blocks=1 00:24:28.978 --rc geninfo_unexecuted_blocks=1 00:24:28.978 00:24:28.978 ' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:28.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.978 --rc genhtml_branch_coverage=1 00:24:28.978 --rc genhtml_function_coverage=1 00:24:28.978 --rc genhtml_legend=1 00:24:28.978 --rc geninfo_all_blocks=1 00:24:28.978 --rc geninfo_unexecuted_blocks=1 00:24:28.978 00:24:28.978 ' 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.978 
07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.978 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:28.979 00:24:28.979 real 0m0.211s 00:24:28.979 user 0m0.131s 00:24:28.979 sys 0m0.088s 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:28.979 ************************************ 00:24:28.979 END TEST dma 00:24:28.979 ************************************ 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.979 ************************************ 00:24:28.979 START TEST nvmf_identify 00:24:28.979 
************************************ 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.979 * Looking for test storage... 00:24:28.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.979 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.241 --rc genhtml_branch_coverage=1 00:24:29.241 --rc genhtml_function_coverage=1 00:24:29.241 --rc genhtml_legend=1 00:24:29.241 --rc geninfo_all_blocks=1 00:24:29.241 --rc geninfo_unexecuted_blocks=1 00:24:29.241 00:24:29.241 ' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.241 --rc genhtml_branch_coverage=1 00:24:29.241 --rc genhtml_function_coverage=1 00:24:29.241 --rc genhtml_legend=1 00:24:29.241 --rc geninfo_all_blocks=1 00:24:29.241 --rc geninfo_unexecuted_blocks=1 00:24:29.241 00:24:29.241 ' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.241 --rc genhtml_branch_coverage=1 00:24:29.241 --rc genhtml_function_coverage=1 00:24:29.241 --rc genhtml_legend=1 00:24:29.241 --rc geninfo_all_blocks=1 00:24:29.241 --rc geninfo_unexecuted_blocks=1 00:24:29.241 00:24:29.241 ' 00:24:29.241 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:29.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.241 --rc genhtml_branch_coverage=1 00:24:29.241 --rc genhtml_function_coverage=1 00:24:29.241 --rc genhtml_legend=1 00:24:29.242 --rc geninfo_all_blocks=1 00:24:29.242 --rc geninfo_unexecuted_blocks=1 00:24:29.242 00:24:29.242 ' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:29.242 07:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:37.387 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.387 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:37.388 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
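
The device-discovery loop traced above (nvmf/common.sh@410-428) boils down to a sysfs walk; a minimal sketch, with the two E810 functions found in this run hard-coded for illustration:

  shopt -s nullglob          # drop the glob entirely if a function has no netdev
  pci_devs=(0000:31:00.0 0000:31:00.1)   # E810 functions reported in this run
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per bound interface
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path prefix
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

On this machine both functions resolve to cvl_0_0 and cvl_0_1, which the harness then splits into target and initiator interfaces, as the trace below shows.
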
00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:37.388 Found net devices under 0000:31:00.0: cvl_0_0 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:37.388 Found net devices under 0000:31:00.1: cvl_0_1 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.388 07:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.388 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.388 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.388 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.388 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:24:37.650 00:24:37.650 --- 10.0.0.2 ping statistics --- 00:24:37.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.650 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:24:37.650 00:24:37.650 --- 10.0.0.1 ping statistics --- 00:24:37.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.650 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1384905 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1384905 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1384905 ']' 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:37.650 07:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.650 [2024-11-20 07:25:12.320044] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
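
Condensed, the target bring-up that produced the trace above looks like the following sketch; interface names, addresses, and flags match this run, but the waitforlisten helper is approximated here by polling the RPC socket (an assumption; the real helper does more bookkeeping):

  # Move the target-side port into its own namespace and dual-home the link.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Launch the target inside the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1                                       # poll until the app is listening
  done

The cross-namespace pings above are the harness's sanity check that the two interfaces can reach each other before the target is started.
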
00:24:37.650 [2024-11-20 07:25:12.320115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.650 [2024-11-20 07:25:12.411348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.912 [2024-11-20 07:25:12.453974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.912 [2024-11-20 07:25:12.454011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.912 [2024-11-20 07:25:12.454019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.912 [2024-11-20 07:25:12.454027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.912 [2024-11-20 07:25:12.454033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.912 [2024-11-20 07:25:12.455553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.912 [2024-11-20 07:25:12.455673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.912 [2024-11-20 07:25:12.455829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.912 [2024-11-20 07:25:12.455829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.484 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.484 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:38.484 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.484 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.484 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.484 [2024-11-20 07:25:13.111575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 Malloc0 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 [2024-11-20 07:25:13.221258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.485 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.749 [ 00:24:38.749 { 00:24:38.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.749 "subtype": "Discovery", 00:24:38.749 "listen_addresses": [ 00:24:38.749 { 00:24:38.749 "trtype": "TCP", 00:24:38.749 "adrfam": "IPv4", 00:24:38.749 "traddr": "10.0.0.2", 00:24:38.749 "trsvcid": "4420" 00:24:38.749 } 00:24:38.749 ], 00:24:38.749 "allow_any_host": true, 00:24:38.749 "hosts": [] 00:24:38.749 }, 00:24:38.749 { 00:24:38.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.749 "subtype": "NVMe", 00:24:38.749 "listen_addresses": [ 00:24:38.749 { 00:24:38.749 "trtype": "TCP", 00:24:38.749 "adrfam": "IPv4", 00:24:38.749 "traddr": "10.0.0.2", 00:24:38.750 "trsvcid": "4420" 00:24:38.750 } 00:24:38.750 ], 00:24:38.750 "allow_any_host": true, 00:24:38.750 "hosts": [], 00:24:38.750 "serial_number": "SPDK00000000000001", 00:24:38.750 "model_number": "SPDK bdev Controller", 00:24:38.750 "max_namespaces": 32, 00:24:38.750 "min_cntlid": 1, 00:24:38.750 "max_cntlid": 65519, 00:24:38.750 "namespaces": [ 00:24:38.750 { 00:24:38.750 "nsid": 1, 00:24:38.750 "bdev_name": "Malloc0", 00:24:38.750 "name": "Malloc0", 00:24:38.750 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:38.750 "eui64": "ABCDEF0123456789", 00:24:38.750 "uuid": "bbdcba04-513e-45b7-85d4-fb8f2f8f2440" 00:24:38.750 } 00:24:38.750 ] 00:24:38.750 } 00:24:38.750 ] 00:24:38.750 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.750 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:38.750 [2024-11-20 07:25:13.283588] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:24:38.750 [2024-11-20 07:25:13.283631] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385185 ] 00:24:38.750 [2024-11-20 07:25:13.340117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:38.750 [2024-11-20 07:25:13.340173] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:38.750 [2024-11-20 07:25:13.340180] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:38.750 [2024-11-20 07:25:13.340195] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:38.750 [2024-11-20 07:25:13.340206] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:38.750 [2024-11-20 07:25:13.340887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:38.750 [2024-11-20 07:25:13.340922] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11ae550 0 00:24:38.750 [2024-11-20 07:25:13.346881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:38.750 [2024-11-20 07:25:13.346894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:38.750 [2024-11-20 07:25:13.346900] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:38.750 [2024-11-20 07:25:13.346903] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:38.750 [2024-11-20 07:25:13.346937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.346943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.346947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.346961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:38.750 [2024-11-20 07:25:13.346979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.750 [2024-11-20 07:25:13.352874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.352884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.352888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.352893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.352903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:38.750 [2024-11-20 07:25:13.352911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:38.750 [2024-11-20 07:25:13.352916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:38.750 [2024-11-20 07:25:13.352930] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.352934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.352938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.352946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.750 [2024-11-20 07:25:13.352959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.750 [2024-11-20 07:25:13.353181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.353188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.353192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.353205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:38.750 [2024-11-20 07:25:13.353213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:38.750 [2024-11-20 07:25:13.353220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.353234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.750 [2024-11-20 07:25:13.353245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.750 [2024-11-20 07:25:13.353424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.353431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.353434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.353443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:38.750 [2024-11-20 07:25:13.353452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.353458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.353473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.750 [2024-11-20 07:25:13.353483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 
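
The rpc_cmd calls behind the subsystem listing above, replayed as a plain script against the running target (same arguments as identify.sh@24-39 in this run; paths are relative to the spdk checkout), followed by the identify invocation whose admin-queue handshake is being traced here:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Query the discovery subsystem over the fabric, logging all transport traffic.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The DEBUG lines surrounding this point walk the standard controller-enable sequence: FABRIC CONNECT on the admin queue, property reads of VS and CAP, then, as the trace continues, CC.EN = 0 / CSTS.RDY = 0 is observed, CC.EN = 1 is written, and the host polls until CSTS.RDY = 1 before issuing the first IDENTIFY.
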
00:24:38.750 [2024-11-20 07:25:13.353678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.353685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.353688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.353697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.353707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.353721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.750 [2024-11-20 07:25:13.353731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.750 [2024-11-20 07:25:13.353931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.353938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.353941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.353945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.353950] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:38.750 [2024-11-20 07:25:13.353957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.353965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.354074] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:38.750 [2024-11-20 07:25:13.354079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.354088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.354092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.354096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.354102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.750 [2024-11-20 07:25:13.354113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.750 [2024-11-20 07:25:13.354281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.750 [2024-11-20 07:25:13.354287] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.750 [2024-11-20 07:25:13.354291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.354294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.750 [2024-11-20 07:25:13.354299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:38.750 [2024-11-20 07:25:13.354308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.354312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.750 [2024-11-20 07:25:13.354316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.750 [2024-11-20 07:25:13.354323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.751 [2024-11-20 07:25:13.354333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.751 [2024-11-20 07:25:13.354493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.751 [2024-11-20 07:25:13.354500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.751 [2024-11-20 07:25:13.354503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.354507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.751 [2024-11-20 07:25:13.354512] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:38.751 [2024-11-20 07:25:13.354517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.354524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:38.751 [2024-11-20 07:25:13.354534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.354544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.354547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.354555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.751 [2024-11-20 07:25:13.354565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.751 [2024-11-20 07:25:13.354843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.751 [2024-11-20 07:25:13.354850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.751 [2024-11-20 07:25:13.354854] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.354858] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ae550): datao=0, datal=4096, cccid=0 00:24:38.751 [2024-11-20 07:25:13.354868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1210100) on tqpair(0x11ae550): expected_datao=0, payload_size=4096 00:24:38.751 [2024-11-20 07:25:13.354873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.354881] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.354885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.751 [2024-11-20 07:25:13.355030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.751 [2024-11-20 07:25:13.355033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.751 [2024-11-20 07:25:13.355045] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:38.751 [2024-11-20 07:25:13.355050] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:38.751 [2024-11-20 07:25:13.355055] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:38.751 [2024-11-20 07:25:13.355063] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:38.751 [2024-11-20 07:25:13.355067] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:38.751 [2024-11-20 07:25:13.355072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.355082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.355089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.751 [2024-11-20 07:25:13.355115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.751 [2024-11-20 07:25:13.355290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.751 [2024-11-20 07:25:13.355296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.751 [2024-11-20 07:25:13.355300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550 00:24:38.751 [2024-11-20 07:25:13.355311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11ae550) 00:24:38.751 
[2024-11-20 07:25:13.355325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.751 [2024-11-20 07:25:13.355331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.751 [2024-11-20 07:25:13.355353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.751 [2024-11-20 07:25:13.355372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.751 [2024-11-20 07:25:13.355390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.355398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:38.751 [2024-11-20 07:25:13.355405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.751 [2024-11-20 07:25:13.355427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210100, cid 0, qid 0 00:24:38.751 [2024-11-20 07:25:13.355433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210280, cid 1, qid 0 00:24:38.751 [2024-11-20 07:25:13.355437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210400, cid 2, qid 0 00:24:38.751 [2024-11-20 07:25:13.355442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.751 [2024-11-20 07:25:13.355447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210700, cid 4, qid 0 00:24:38.751 [2024-11-20 07:25:13.355714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.751 [2024-11-20 07:25:13.355720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.751 [2024-11-20 07:25:13.355724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:38.751 [2024-11-20 07:25:13.355728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210700) on tqpair=0x11ae550 00:24:38.751 [2024-11-20 07:25:13.355735] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:38.751 [2024-11-20 07:25:13.355740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:38.751 [2024-11-20 07:25:13.355751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.355761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.751 [2024-11-20 07:25:13.355771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210700, cid 4, qid 0 00:24:38.751 [2024-11-20 07:25:13.355944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.751 [2024-11-20 07:25:13.355952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.751 [2024-11-20 07:25:13.355957] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355961] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ae550): datao=0, datal=4096, cccid=4 00:24:38.751 [2024-11-20 07:25:13.355965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1210700) on tqpair(0x11ae550): expected_datao=0, payload_size=4096 00:24:38.751 [2024-11-20 07:25:13.355970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.355991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.397057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.751 [2024-11-20 07:25:13.397069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.751 [2024-11-20 07:25:13.397072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.397077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210700) on tqpair=0x11ae550 00:24:38.751 [2024-11-20 07:25:13.397089] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:38.751 [2024-11-20 07:25:13.397113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.397117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ae550) 00:24:38.751 [2024-11-20 07:25:13.397125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.751 [2024-11-20 07:25:13.397132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.751 [2024-11-20 07:25:13.397136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11ae550) 00:24:38.752 [2024-11-20 07:25:13.397145] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.752 [2024-11-20 07:25:13.397160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210700, cid 4, qid 0 00:24:38.752 [2024-11-20 07:25:13.397166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210880, cid 5, qid 0 00:24:38.752 [2024-11-20 07:25:13.397364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.752 [2024-11-20 07:25:13.397371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.752 [2024-11-20 07:25:13.397374] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397378] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ae550): datao=0, datal=1024, cccid=4 00:24:38.752 [2024-11-20 07:25:13.397382] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1210700) on tqpair(0x11ae550): expected_datao=0, payload_size=1024 00:24:38.752 [2024-11-20 07:25:13.397387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.752 [2024-11-20 07:25:13.397409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.752 [2024-11-20 07:25:13.397412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.397416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210880) on tqpair=0x11ae550 00:24:38.752 [2024-11-20 07:25:13.441870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.752 [2024-11-20 07:25:13.441879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.752 [2024-11-20 07:25:13.441883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.441887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210700) on tqpair=0x11ae550 00:24:38.752 [2024-11-20 07:25:13.441898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.441904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ae550) 00:24:38.752 [2024-11-20 07:25:13.441911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.752 [2024-11-20 07:25:13.441927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210700, cid 4, qid 0 00:24:38.752 [2024-11-20 07:25:13.442138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.752 [2024-11-20 07:25:13.442145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.752 [2024-11-20 07:25:13.442148] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.752 [2024-11-20 07:25:13.442152] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ae550): datao=0, datal=3072, cccid=4 00:24:38.752 [2024-11-20 07:25:13.442156] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1210700) on tqpair(0x11ae550): expected_datao=0, payload_size=3072 00:24:38.752 [2024-11-20 07:25:13.442161] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442168] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:38.752 [2024-11-20 07:25:13.442324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:38.752 [2024-11-20 07:25:13.442328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210700) on tqpair=0x11ae550
00:24:38.752 [2024-11-20 07:25:13.442340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11ae550)
00:24:38.752 [2024-11-20 07:25:13.442350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.752 [2024-11-20 07:25:13.442364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210700, cid 4, qid 0
00:24:38.752 [2024-11-20 07:25:13.442636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:38.752 [2024-11-20 07:25:13.442643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:38.752 [2024-11-20 07:25:13.442646] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442650] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11ae550): datao=0, datal=8, cccid=4
00:24:38.752 [2024-11-20 07:25:13.442654] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1210700) on tqpair(0x11ae550): expected_datao=0, payload_size=8
00:24:38.752 [2024-11-20 07:25:13.442659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442665] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.442669] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.483036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:38.752 [2024-11-20 07:25:13.483045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:38.752 [2024-11-20 07:25:13.483049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:38.752 [2024-11-20 07:25:13.483053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210700) on tqpair=0x11ae550
00:24:38.752 =====================================================
00:24:38.752 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:38.752 =====================================================
00:24:38.752 Controller Capabilities/Features
00:24:38.752 ================================
00:24:38.752 Vendor ID: 0000
00:24:38.752 Subsystem Vendor ID: 0000
00:24:38.752 Serial Number: ....................
00:24:38.752 Model Number: ........................................
00:24:38.752 Firmware Version: 25.01
00:24:38.752 Recommended Arb Burst: 0
00:24:38.752 IEEE OUI Identifier: 00 00 00
00:24:38.752 Multi-path I/O
00:24:38.752 May have multiple subsystem ports: No
00:24:38.752 May have multiple controllers: No
00:24:38.752 Associated with SR-IOV VF: No
00:24:38.752 Max Data Transfer Size: 131072
00:24:38.752 Max Number of Namespaces: 0
00:24:38.752 Max Number of I/O Queues: 1024
00:24:38.752 NVMe Specification Version (VS): 1.3
00:24:38.752 NVMe Specification Version (Identify): 1.3
00:24:38.752 Maximum Queue Entries: 128
00:24:38.752 Contiguous Queues Required: Yes
00:24:38.752 Arbitration Mechanisms Supported
00:24:38.752 Weighted Round Robin: Not Supported
00:24:38.752 Vendor Specific: Not Supported
00:24:38.752 Reset Timeout: 15000 ms
00:24:38.752 Doorbell Stride: 4 bytes
00:24:38.752 NVM Subsystem Reset: Not Supported
00:24:38.752 Command Sets Supported
00:24:38.752 NVM Command Set: Supported
00:24:38.752 Boot Partition: Not Supported
00:24:38.752 Memory Page Size Minimum: 4096 bytes
00:24:38.752 Memory Page Size Maximum: 4096 bytes
00:24:38.752 Persistent Memory Region: Not Supported
00:24:38.752 Optional Asynchronous Events Supported
00:24:38.752 Namespace Attribute Notices: Not Supported
00:24:38.752 Firmware Activation Notices: Not Supported
00:24:38.752 ANA Change Notices: Not Supported
00:24:38.752 PLE Aggregate Log Change Notices: Not Supported
00:24:38.752 LBA Status Info Alert Notices: Not Supported
00:24:38.752 EGE Aggregate Log Change Notices: Not Supported
00:24:38.752 Normal NVM Subsystem Shutdown event: Not Supported
00:24:38.752 Zone Descriptor Change Notices: Not Supported
00:24:38.752 Discovery Log Change Notices: Supported
00:24:38.752 Controller Attributes
00:24:38.752 128-bit Host Identifier: Not Supported
00:24:38.752 Non-Operational Permissive Mode: Not Supported
00:24:38.752 NVM Sets: Not Supported
00:24:38.752 Read Recovery Levels: Not Supported
00:24:38.752 Endurance Groups: Not Supported
00:24:38.752 Predictable Latency Mode: Not Supported
00:24:38.752 Traffic Based Keep ALive: Not Supported
00:24:38.752 Namespace Granularity: Not Supported
00:24:38.752 SQ Associations: Not Supported
00:24:38.752 UUID List: Not Supported
00:24:38.752 Multi-Domain Subsystem: Not Supported
00:24:38.752 Fixed Capacity Management: Not Supported
00:24:38.752 Variable Capacity Management: Not Supported
00:24:38.752 Delete Endurance Group: Not Supported
00:24:38.752 Delete NVM Set: Not Supported
00:24:38.752 Extended LBA Formats Supported: Not Supported
00:24:38.752 Flexible Data Placement Supported: Not Supported
00:24:38.752
00:24:38.752 Controller Memory Buffer Support
00:24:38.752 ================================
00:24:38.752 Supported: No
00:24:38.752
00:24:38.752 Persistent Memory Region Support
00:24:38.752 ================================
00:24:38.752 Supported: No
00:24:38.752
00:24:38.752 Admin Command Set Attributes
00:24:38.752 ============================
00:24:38.752 Security Send/Receive: Not Supported
00:24:38.752 Format NVM: Not Supported
00:24:38.752 Firmware Activate/Download: Not Supported
00:24:38.752 Namespace Management: Not Supported
00:24:38.752 Device Self-Test: Not Supported
00:24:38.752 Directives: Not Supported
00:24:38.752 NVMe-MI: Not Supported
00:24:38.752 Virtualization Management: Not Supported
00:24:38.752 Doorbell Buffer Config: Not Supported
00:24:38.752 Get LBA Status Capability: Not Supported
00:24:38.752 Command & Feature Lockdown Capability: Not Supported
00:24:38.752 Abort Command Limit: 1
00:24:38.752 Async Event Request Limit: 4
00:24:38.752 Number of Firmware Slots: N/A
00:24:38.752 Firmware Slot 1 Read-Only: N/A
00:24:38.752 Firmware Activation Without Reset: N/A
00:24:38.753 Multiple Update Detection Support: N/A
00:24:38.753 Firmware Update Granularity: No Information Provided
00:24:38.753 Per-Namespace SMART Log: No
00:24:38.753 Asymmetric Namespace Access Log Page: Not Supported
00:24:38.753 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:38.753 Command Effects Log Page: Not Supported
00:24:38.753 Get Log Page Extended Data: Supported
00:24:38.753 Telemetry Log Pages: Not Supported
00:24:38.753 Persistent Event Log Pages: Not Supported
00:24:38.753 Supported Log Pages Log Page: May Support
00:24:38.753 Commands Supported & Effects Log Page: Not Supported
00:24:38.753 Feature Identifiers & Effects Log Page:May Support
00:24:38.753 NVMe-MI Commands & Effects Log Page: May Support
00:24:38.753 Data Area 4 for Telemetry Log: Not Supported
00:24:38.753 Error Log Page Entries Supported: 128
00:24:38.753 Keep Alive: Not Supported
00:24:38.753
00:24:38.753 NVM Command Set Attributes
00:24:38.753 ==========================
00:24:38.753 Submission Queue Entry Size
00:24:38.753 Max: 1
00:24:38.753 Min: 1
00:24:38.753 Completion Queue Entry Size
00:24:38.753 Max: 1
00:24:38.753 Min: 1
00:24:38.753 Number of Namespaces: 0
00:24:38.753 Compare Command: Not Supported
00:24:38.753 Write Uncorrectable Command: Not Supported
00:24:38.753 Dataset Management Command: Not Supported
00:24:38.753 Write Zeroes Command: Not Supported
00:24:38.753 Set Features Save Field: Not Supported
00:24:38.753 Reservations: Not Supported
00:24:38.753 Timestamp: Not Supported
00:24:38.753 Copy: Not Supported
00:24:38.753 Volatile Write Cache: Not Present
00:24:38.753 Atomic Write Unit (Normal): 1
00:24:38.753 Atomic Write Unit (PFail): 1
00:24:38.753 Atomic Compare & Write Unit: 1
00:24:38.753 Fused Compare & Write: Supported
00:24:38.753 Scatter-Gather List
00:24:38.753 SGL Command Set: Supported
00:24:38.753 SGL Keyed: Supported
00:24:38.753 SGL Bit Bucket Descriptor: Not Supported
00:24:38.753 SGL Metadata Pointer: Not Supported
00:24:38.753 Oversized SGL: Not Supported
00:24:38.753 SGL Metadata Address: Not Supported
00:24:38.753 SGL Offset: Supported
00:24:38.753 Transport SGL Data Block: Not Supported
00:24:38.753 Replay Protected Memory Block: Not Supported
00:24:38.753
00:24:38.753 Firmware Slot Information
00:24:38.753 =========================
00:24:38.753 Active slot: 0
00:24:38.753
00:24:38.753
00:24:38.753 Error Log
00:24:38.753 =========
00:24:38.753
00:24:38.753 Active Namespaces
00:24:38.753 =================
00:24:38.753 Discovery Log Page
00:24:38.753 ==================
00:24:38.753 Generation Counter: 2
00:24:38.753 Number of Records: 2
00:24:38.753 Record Format: 0
00:24:38.753
00:24:38.753 Discovery Log Entry 0
00:24:38.753 ----------------------
00:24:38.753 Transport Type: 3 (TCP)
00:24:38.753 Address Family: 1 (IPv4)
00:24:38.753 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:38.753 Entry Flags:
00:24:38.753 Duplicate Returned Information: 1
00:24:38.753 Explicit Persistent Connection Support for Discovery: 1
00:24:38.753 Transport Requirements:
00:24:38.753 Secure Channel: Not Required
00:24:38.753 Port ID: 0 (0x0000)
00:24:38.753 Controller ID: 65535 (0xffff)
00:24:38.753 Admin Max SQ Size: 128
00:24:38.753 Transport Service Identifier: 4420
00:24:38.753 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:38.753 Transport Address: 10.0.0.2
00:24:38.753 Discovery Log Entry 1
00:24:38.753 ----------------------
00:24:38.753 Transport Type: 3 (TCP)
00:24:38.753 Address Family: 1 (IPv4)
00:24:38.753 Subsystem Type: 2 (NVM Subsystem)
00:24:38.753 Entry Flags:
00:24:38.753 Duplicate Returned Information: 0
00:24:38.753 Explicit Persistent Connection Support for Discovery: 0
00:24:38.753 Transport Requirements:
00:24:38.753 Secure Channel: Not Required
00:24:38.753 Port ID: 0 (0x0000)
00:24:38.753 Controller ID: 65535 (0xffff)
00:24:38.753 Admin Max SQ Size: 128
00:24:38.753 Transport Service Identifier: 4420
00:24:38.753 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:38.753 Transport Address: 10.0.0.2 [2024-11-20 07:25:13.483138] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:38.753 [2024-11-20 07:25:13.483150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210100) on tqpair=0x11ae550
00:24:38.753 [2024-11-20 07:25:13.483157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.753 [2024-11-20 07:25:13.483162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210280) on tqpair=0x11ae550
00:24:38.753 [2024-11-20 07:25:13.483169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.753 [2024-11-20 07:25:13.483174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210400) on tqpair=0x11ae550
00:24:38.753 [2024-11-20 07:25:13.483179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.753 [2024-11-20 07:25:13.483184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550
00:24:38.753 [2024-11-20 07:25:13.483188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.753 [2024-11-20 07:25:13.483199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:38.753 [2024-11-20 07:25:13.483203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:38.753 [2024-11-20 07:25:13.483207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550)
00:24:38.753 [2024-11-20 07:25:13.483214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.753 [2024-11-20 07:25:13.483228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0
00:24:38.753 [2024-11-20 07:25:13.483345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:38.753 [2024-11-20 07:25:13.483351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:38.753 [2024-11-20 07:25:13.483355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:38.753 [2024-11-20 07:25:13.483359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550
00:24:38.753 [2024-11-20 07:25:13.483366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:38.753 [2024-11-20 07:25:13.483370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:38.753 [2024-11-20 07:25:13.483373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550)
00:24:38.753 [2024-11-20
07:25:13.483380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.753 [2024-11-20 07:25:13.483393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.753 [2024-11-20 07:25:13.483575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.753 [2024-11-20 07:25:13.483581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.753 [2024-11-20 07:25:13.483585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.753 [2024-11-20 07:25:13.483594] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:38.753 [2024-11-20 07:25:13.483598] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:38.753 [2024-11-20 07:25:13.483608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.753 [2024-11-20 07:25:13.483622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.753 [2024-11-20 07:25:13.483632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.753 [2024-11-20 07:25:13.483796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.753 [2024-11-20 07:25:13.483803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.753 [2024-11-20 07:25:13.483806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.753 [2024-11-20 07:25:13.483820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.483829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.753 [2024-11-20 07:25:13.483836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.753 [2024-11-20 07:25:13.483846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.753 [2024-11-20 07:25:13.484050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.753 [2024-11-20 07:25:13.484057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.753 [2024-11-20 07:25:13.484060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.484064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.753 [2024-11-20 07:25:13.484074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.753 [2024-11-20 07:25:13.484078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484081] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.484088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.484098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.484302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.484308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.484311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.484325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.484339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.484349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.484515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.484521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.484525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.484539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.484553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.484563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.484755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.484761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.484764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.484778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.484787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.484794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.484804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.485007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.485014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.485018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.485031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.485045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.485055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.485259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.485265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.485269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.485282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.485296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.485306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.485476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.485482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.485485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.485499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.485513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.485523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 
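(Editor's sketch.) The run of FABRIC PROPERTY GET qid:0 cid:3 commands above and immediately below is the shutdown poll: nvme_ctrlr_shutdown_set_cc_done has set CC.SHN (with RTD3E = 0 the driver falls back to the 10000 ms budget it logged), and the host then re-reads CSTS over the fabric until CSTS.SHST reports completion, which the next entries show happening after about 6 ms. A minimal sketch of the same check through SPDK's public register accessor; the helper name is invented, and the in-tree driver does this inside its async state machine rather than a blocking loop:

    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    /* Spin until CSTS.SHST reads "shutdown complete". On a fabrics controller,
     * each spdk_nvme_ctrlr_get_regs_csts() call turns into a CSTS Property Get
     * like the ones printed in this trace. */
    static void wait_for_shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
    {
            union spdk_nvme_csts_register csts;

            do {
                    csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
            } while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
    }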
[2024-11-20 07:25:13.485712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.485718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.485722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.485735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.485743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.485751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.485762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.489869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.489878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.489881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.489885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.489895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.489899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.489903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11ae550) 00:24:38.754 [2024-11-20 07:25:13.489909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.754 [2024-11-20 07:25:13.489921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1210580, cid 3, qid 0 00:24:38.754 [2024-11-20 07:25:13.490104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.754 [2024-11-20 07:25:13.490110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.754 [2024-11-20 07:25:13.490114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.754 [2024-11-20 07:25:13.490118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1210580) on tqpair=0x11ae550 00:24:38.754 [2024-11-20 07:25:13.490125] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:38.755 00:24:38.755 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:39.019 [2024-11-20 07:25:13.535190] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
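(Editor's sketch.) The xtrace line above is identify.sh invoking the spdk_nvme_identify example app against nqn.2016-06.io.spdk:cnode1, which produces the second connect/init trace that follows. A stripped-down sketch of the same setup through SPDK's public host API; the program name and error handling are invented for the sketch, and spdk_nvme_connect() is what drives the whole sequence traced below (icreq, FABRIC CONNECT, register reads, CC.EN = 1, IDENTIFY, AER and keep-alive configuration):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch"; /* arbitrary name for this sketch */
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same transport ID string the test passes via -r */
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* blocks until the init state machine reaches ready */
            if (ctrlr == NULL) {
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("CNTLID 0x%04x\n", cdata->cntlid); /* matches "CNTLID 0x0001" in the trace */

            spdk_nvme_detach(ctrlr); /* triggers the shutdown/destruct path seen earlier */
            return 0;
    }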
00:24:39.019 [2024-11-20 07:25:13.535254] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385259 ] 00:24:39.019 [2024-11-20 07:25:13.587834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:39.019 [2024-11-20 07:25:13.587889] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:39.019 [2024-11-20 07:25:13.587894] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:39.019 [2024-11-20 07:25:13.587912] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:39.019 [2024-11-20 07:25:13.587922] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:39.019 [2024-11-20 07:25:13.592064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:39.019 [2024-11-20 07:25:13.592092] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ccf550 0 00:24:39.019 [2024-11-20 07:25:13.599874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:39.019 [2024-11-20 07:25:13.599886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:39.019 [2024-11-20 07:25:13.599890] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:39.019 [2024-11-20 07:25:13.599894] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:39.019 [2024-11-20 07:25:13.599923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.599932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.599936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.599948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:39.020 [2024-11-20 07:25:13.599966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.607873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.607882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.607887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.607891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.607903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:39.020 [2024-11-20 07:25:13.607910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:39.020 [2024-11-20 07:25:13.607915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:39.020 [2024-11-20 07:25:13.607927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.607932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.607936] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.607944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.607957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.608131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.608137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.608141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.608150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:39.020 [2024-11-20 07:25:13.608158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:39.020 [2024-11-20 07:25:13.608165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.608179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.608190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.608343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.608350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.608354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.608363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:39.020 [2024-11-20 07:25:13.608371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.608378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.608395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.608406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.608562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.608569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 
07:25:13.608572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.608582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.608591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.608606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.608617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.608785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.608792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.608795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.608804] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:39.020 [2024-11-20 07:25:13.608809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.608817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.608925] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:39.020 [2024-11-20 07:25:13.608930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.608938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.608946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.608953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.608964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.609133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.609140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.609143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 
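(Editor's sketch.) The entries above walk the classic enable handshake: CC.EN and CSTS.RDY both read 0, the driver writes CC.EN = 1 (the FABRIC PROPERTY SET), and the next step waits for CSTS.RDY = 1 with a 15000 ms budget. That budget comes from CAP.TO, which counts in 500 ms units, so a CAP.TO of 30 would give 30 * 500 ms = 15000 ms. An illustrative sketch with the public register accessors; the helper name is invented, and unlike the driver's _nvme_ctrlr_set_state machinery this is a blocking loop with no timeout handling:

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* After CC.EN has been set to 1, poll CSTS.RDY. Each read is a CSTS
     * Property Get on a fabrics controller. */
    static void wait_for_ready_sketch(struct spdk_nvme_ctrlr *ctrlr)
    {
            union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
            uint64_t budget_ms = (uint64_t)cap.bits.to * 500; /* CAP.TO is in 500 ms units */

            (void)budget_ms; /* a real implementation would give up once this is spent */
            while (spdk_nvme_ctrlr_get_regs_csts(ctrlr).bits.rdy != 1) {
                    /* spin; the trace shows one PROPERTY GET per iteration */
            }
    }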
[2024-11-20 07:25:13.609152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:39.020 [2024-11-20 07:25:13.609162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.609180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.609190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.609398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.609405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.609409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.609418] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:39.020 [2024-11-20 07:25:13.609423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:39.020 [2024-11-20 07:25:13.609430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:39.020 [2024-11-20 07:25:13.609438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:39.020 [2024-11-20 07:25:13.609447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.020 [2024-11-20 07:25:13.609458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.020 [2024-11-20 07:25:13.609468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.020 [2024-11-20 07:25:13.609664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.020 [2024-11-20 07:25:13.609671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.020 [2024-11-20 07:25:13.609675] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609679] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=4096, cccid=0 00:24:39.020 [2024-11-20 07:25:13.609684] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31100) on tqpair(0x1ccf550): expected_datao=0, payload_size=4096 00:24:39.020 [2024-11-20 07:25:13.609689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609696] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.609700] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.650046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.020 [2024-11-20 07:25:13.650057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.020 [2024-11-20 07:25:13.650060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.020 [2024-11-20 07:25:13.650064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.020 [2024-11-20 07:25:13.650072] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:39.021 [2024-11-20 07:25:13.650076] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:39.021 [2024-11-20 07:25:13.650081] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:39.021 [2024-11-20 07:25:13.650090] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:39.021 [2024-11-20 07:25:13.650095] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:39.021 [2024-11-20 07:25:13.650103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:39.021 [2024-11-20 07:25:13.650147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.021 [2024-11-20 07:25:13.650327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.021 [2024-11-20 07:25:13.650333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.021 [2024-11-20 07:25:13.650337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.021 [2024-11-20 07:25:13.650347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.021 [2024-11-20 07:25:13.650367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 
07:25:13.650375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.021 [2024-11-20 07:25:13.650387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.021 [2024-11-20 07:25:13.650406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.021 [2024-11-20 07:25:13.650424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.021 [2024-11-20 07:25:13.650460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31100, cid 0, qid 0 00:24:39.021 [2024-11-20 07:25:13.650467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31280, cid 1, qid 0 00:24:39.021 [2024-11-20 07:25:13.650472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31400, cid 2, qid 0 00:24:39.021 [2024-11-20 07:25:13.650477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.021 [2024-11-20 07:25:13.650482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.021 [2024-11-20 07:25:13.650667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.021 [2024-11-20 07:25:13.650674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.021 [2024-11-20 07:25:13.650677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.021 [2024-11-20 07:25:13.650688] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:39.021 [2024-11-20 07:25:13.650693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.650729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:39.021 [2024-11-20 07:25:13.650739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.021 [2024-11-20 07:25:13.650893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.021 [2024-11-20 07:25:13.650900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.021 [2024-11-20 07:25:13.650903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.021 [2024-11-20 07:25:13.650973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.650990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.650993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.651000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.021 [2024-11-20 07:25:13.651011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.021 [2024-11-20 07:25:13.651180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.021 [2024-11-20 07:25:13.651187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.021 [2024-11-20 07:25:13.651190] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.651194] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=4096, cccid=4 00:24:39.021 [2024-11-20 07:25:13.651199] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31700) on tqpair(0x1ccf550): expected_datao=0, payload_size=4096 00:24:39.021 [2024-11-20 07:25:13.651205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.651220] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.651224] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 
07:25:13.693871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.021 [2024-11-20 07:25:13.693882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.021 [2024-11-20 07:25:13.693886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.693890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.021 [2024-11-20 07:25:13.693900] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:39.021 [2024-11-20 07:25:13.693915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.693925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:39.021 [2024-11-20 07:25:13.693933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.693936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.021 [2024-11-20 07:25:13.693943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.021 [2024-11-20 07:25:13.693956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.021 [2024-11-20 07:25:13.694120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.021 [2024-11-20 07:25:13.694127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.021 [2024-11-20 07:25:13.694130] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.694134] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=4096, cccid=4 00:24:39.021 [2024-11-20 07:25:13.694139] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31700) on tqpair(0x1ccf550): expected_datao=0, payload_size=4096 00:24:39.021 [2024-11-20 07:25:13.694143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.694158] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.694162] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.735047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.021 [2024-11-20 07:25:13.735056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.021 [2024-11-20 07:25:13.735060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.021 [2024-11-20 07:25:13.735064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.021 [2024-11-20 07:25:13.735077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.735087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.735094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.735098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.735105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.735116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.022 [2024-11-20 07:25:13.735329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.022 [2024-11-20 07:25:13.735335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.022 [2024-11-20 07:25:13.735339] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.735345] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=4096, cccid=4 00:24:39.022 [2024-11-20 07:25:13.735350] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31700) on tqpair(0x1ccf550): expected_datao=0, payload_size=4096 00:24:39.022 [2024-11-20 07:25:13.735354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.735368] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.735372] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.776090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.776093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.776105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776145] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:39.022 [2024-11-20 07:25:13.776150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:39.022 [2024-11-20 07:25:13.776155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:39.022 [2024-11-20 07:25:13.776169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 
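The state transitions traced above are the complete SPDK NVMe host bring-up for this TCP controller: identify controller (transport max_xfer_size clamped by MDTS to 131072, CNTLID 0x0001, 16 transport SGEs, fused compare-and-write available), configure AER, set the keep-alive timeout, set the number of queues, identify active namespaces, identify each namespace and its ID descriptors, then set supported log pages and features until the state machine reports "ready (no timeout)". A sketch of reproducing this trace by hand against the same target, assuming an SPDK build tree (the example binary here is built as build/examples/identify and installed as spdk_nvme_identify; test/nvmf/host/identify.sh drives the same binary):

  # Assumed path; the transport ID string matches the target used throughout this run.
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

A single connect on the host side walks every state logged above; the *DEBUG* lines appear only when SPDK is built with debug logging enabled.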
[2024-11-20 07:25:13.776173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.776180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.776187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.776200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.022 [2024-11-20 07:25:13.776214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.022 [2024-11-20 07:25:13.776220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31880, cid 5, qid 0 00:24:39.022 [2024-11-20 07:25:13.776364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.776370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.776374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.776385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.776393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.776396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31880) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.776410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.776420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.776430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31880, cid 5, qid 0 00:24:39.022 [2024-11-20 07:25:13.776616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.776622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.776625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31880) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.776638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.776649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.776658] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31880, cid 5, qid 0 00:24:39.022 [2024-11-20 07:25:13.776835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.776842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.776845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31880) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.776858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.776867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.776874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.776884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31880, cid 5, qid 0 00:24:39.022 [2024-11-20 07:25:13.777036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.022 [2024-11-20 07:25:13.777042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.022 [2024-11-20 07:25:13.777046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31880) on tqpair=0x1ccf550 00:24:39.022 [2024-11-20 07:25:13.777063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.777074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.777082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.777092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.777099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.777111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.777118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ccf550) 00:24:39.022 [2024-11-20 07:25:13.777128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.022 [2024-11-20 07:25:13.777139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31880, cid 5, qid 0 00:24:39.022 
[2024-11-20 07:25:13.777145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31700, cid 4, qid 0 00:24:39.022 [2024-11-20 07:25:13.777149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31a00, cid 6, qid 0 00:24:39.022 [2024-11-20 07:25:13.777154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31b80, cid 7, qid 0 00:24:39.022 [2024-11-20 07:25:13.777373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.022 [2024-11-20 07:25:13.777380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.022 [2024-11-20 07:25:13.777383] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.777387] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=8192, cccid=5 00:24:39.022 [2024-11-20 07:25:13.777392] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31880) on tqpair(0x1ccf550): expected_datao=0, payload_size=8192 00:24:39.022 [2024-11-20 07:25:13.777396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780870] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780877] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.022 [2024-11-20 07:25:13.780888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.022 [2024-11-20 07:25:13.780892] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780896] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=512, cccid=4 00:24:39.022 [2024-11-20 07:25:13.780900] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31700) on tqpair(0x1ccf550): expected_datao=0, payload_size=512 00:24:39.022 [2024-11-20 07:25:13.780905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780911] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.022 [2024-11-20 07:25:13.780914] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.023 [2024-11-20 07:25:13.780926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.023 [2024-11-20 07:25:13.780929] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780933] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=512, cccid=6 00:24:39.023 [2024-11-20 07:25:13.780937] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31a00) on tqpair(0x1ccf550): expected_datao=0, payload_size=512 00:24:39.023 [2024-11-20 07:25:13.780942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780952] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.023 [2024-11-20 07:25:13.780963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.023 [2024-11-20 07:25:13.780967] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780975] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ccf550): datao=0, datal=4096, cccid=7 00:24:39.023 [2024-11-20 07:25:13.780980] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d31b80) on tqpair(0x1ccf550): expected_datao=0, payload_size=4096 00:24:39.023 [2024-11-20 07:25:13.780984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780990] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.780994] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.781000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.023 [2024-11-20 07:25:13.781005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.023 [2024-11-20 07:25:13.781009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.781013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31880) on tqpair=0x1ccf550 00:24:39.023 [2024-11-20 07:25:13.781025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.023 [2024-11-20 07:25:13.781031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.023 [2024-11-20 07:25:13.781034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.781038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31700) on tqpair=0x1ccf550 00:24:39.023 [2024-11-20 07:25:13.781048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.023 [2024-11-20 07:25:13.781054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.023 [2024-11-20 07:25:13.781057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.781061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31a00) on tqpair=0x1ccf550 00:24:39.023 [2024-11-20 07:25:13.781068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.023 [2024-11-20 07:25:13.781074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.023 [2024-11-20 07:25:13.781077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.023 [2024-11-20 07:25:13.781081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31b80) on tqpair=0x1ccf550 00:24:39.023 ===================================================== 00:24:39.023 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.023 ===================================================== 00:24:39.023 Controller Capabilities/Features 00:24:39.023 ================================ 00:24:39.023 Vendor ID: 8086 00:24:39.023 Subsystem Vendor ID: 8086 00:24:39.023 Serial Number: SPDK00000000000001 00:24:39.023 Model Number: SPDK bdev Controller 00:24:39.023 Firmware Version: 25.01 00:24:39.023 Recommended Arb Burst: 6 00:24:39.023 IEEE OUI Identifier: e4 d2 5c 00:24:39.023 Multi-path I/O 00:24:39.023 May have multiple subsystem ports: Yes 00:24:39.023 May have multiple controllers: Yes 00:24:39.023 Associated with SR-IOV VF: No 00:24:39.023 Max Data Transfer Size: 131072 00:24:39.023 Max Number of Namespaces: 32 00:24:39.023 Max Number of I/O Queues: 127 00:24:39.023 NVMe Specification Version (VS): 1.3 00:24:39.023 NVMe Specification Version (Identify): 1.3 
00:24:39.023 Maximum Queue Entries: 128 00:24:39.023 Contiguous Queues Required: Yes 00:24:39.023 Arbitration Mechanisms Supported 00:24:39.023 Weighted Round Robin: Not Supported 00:24:39.023 Vendor Specific: Not Supported 00:24:39.023 Reset Timeout: 15000 ms 00:24:39.023 Doorbell Stride: 4 bytes 00:24:39.023 NVM Subsystem Reset: Not Supported 00:24:39.023 Command Sets Supported 00:24:39.023 NVM Command Set: Supported 00:24:39.023 Boot Partition: Not Supported 00:24:39.023 Memory Page Size Minimum: 4096 bytes 00:24:39.023 Memory Page Size Maximum: 4096 bytes 00:24:39.023 Persistent Memory Region: Not Supported 00:24:39.023 Optional Asynchronous Events Supported 00:24:39.023 Namespace Attribute Notices: Supported 00:24:39.023 Firmware Activation Notices: Not Supported 00:24:39.023 ANA Change Notices: Not Supported 00:24:39.023 PLE Aggregate Log Change Notices: Not Supported 00:24:39.023 LBA Status Info Alert Notices: Not Supported 00:24:39.023 EGE Aggregate Log Change Notices: Not Supported 00:24:39.023 Normal NVM Subsystem Shutdown event: Not Supported 00:24:39.023 Zone Descriptor Change Notices: Not Supported 00:24:39.023 Discovery Log Change Notices: Not Supported 00:24:39.023 Controller Attributes 00:24:39.023 128-bit Host Identifier: Supported 00:24:39.023 Non-Operational Permissive Mode: Not Supported 00:24:39.023 NVM Sets: Not Supported 00:24:39.023 Read Recovery Levels: Not Supported 00:24:39.023 Endurance Groups: Not Supported 00:24:39.023 Predictable Latency Mode: Not Supported 00:24:39.023 Traffic Based Keep Alive: Not Supported 00:24:39.023 Namespace Granularity: Not Supported 00:24:39.023 SQ Associations: Not Supported 00:24:39.023 UUID List: Not Supported 00:24:39.023 Multi-Domain Subsystem: Not Supported 00:24:39.023 Fixed Capacity Management: Not Supported 00:24:39.023 Variable Capacity Management: Not Supported 00:24:39.023 Delete Endurance Group: Not Supported 00:24:39.023 Delete NVM Set: Not Supported 00:24:39.023 Extended LBA Formats Supported: Not Supported 00:24:39.023 Flexible Data Placement Supported: Not Supported 00:24:39.023 00:24:39.023 Controller Memory Buffer Support 00:24:39.023 ================================ 00:24:39.023 Supported: No 00:24:39.023 00:24:39.023 Persistent Memory Region Support 00:24:39.023 ================================ 00:24:39.023 Supported: No 00:24:39.023 00:24:39.023 Admin Command Set Attributes 00:24:39.023 ============================ 00:24:39.023 Security Send/Receive: Not Supported 00:24:39.023 Format NVM: Not Supported 00:24:39.023 Firmware Activate/Download: Not Supported 00:24:39.023 Namespace Management: Not Supported 00:24:39.023 Device Self-Test: Not Supported 00:24:39.023 Directives: Not Supported 00:24:39.023 NVMe-MI: Not Supported 00:24:39.023 Virtualization Management: Not Supported 00:24:39.023 Doorbell Buffer Config: Not Supported 00:24:39.023 Get LBA Status Capability: Not Supported 00:24:39.023 Command & Feature Lockdown Capability: Not Supported 00:24:39.023 Abort Command Limit: 4 00:24:39.023 Async Event Request Limit: 4 00:24:39.023 Number of Firmware Slots: N/A 00:24:39.023 Firmware Slot 1 Read-Only: N/A 00:24:39.023 Firmware Activation Without Reset: N/A 00:24:39.023 Multiple Update Detection Support: N/A 00:24:39.023 Firmware Update Granularity: No Information Provided 00:24:39.023 Per-Namespace SMART Log: No 00:24:39.023 Asymmetric Namespace Access Log Page: Not Supported 00:24:39.023 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:39.023 Command Effects Log Page: Supported 00:24:39.023 Get Log Page Extended 
Data: Supported 00:24:39.023 Telemetry Log Pages: Not Supported 00:24:39.023 Persistent Event Log Pages: Not Supported 00:24:39.023 Supported Log Pages Log Page: May Support 00:24:39.023 Commands Supported & Effects Log Page: Not Supported 00:24:39.023 Feature Identifiers & Effects Log Page: May Support 00:24:39.023 NVMe-MI Commands & Effects Log Page: May Support 00:24:39.023 Data Area 4 for Telemetry Log: Not Supported 00:24:39.023 Error Log Page Entries Supported: 128 00:24:39.023 Keep Alive: Supported 00:24:39.023 Keep Alive Granularity: 10000 ms 00:24:39.023 00:24:39.023 NVM Command Set Attributes 00:24:39.023 ========================== 00:24:39.023 Submission Queue Entry Size 00:24:39.023 Max: 64 00:24:39.023 Min: 64 00:24:39.023 Completion Queue Entry Size 00:24:39.023 Max: 16 00:24:39.023 Min: 16 00:24:39.023 Number of Namespaces: 32 00:24:39.023 Compare Command: Supported 00:24:39.023 Write Uncorrectable Command: Not Supported 00:24:39.023 Dataset Management Command: Supported 00:24:39.023 Write Zeroes Command: Supported 00:24:39.023 Set Features Save Field: Not Supported 00:24:39.023 Reservations: Supported 00:24:39.023 Timestamp: Not Supported 00:24:39.023 Copy: Supported 00:24:39.023 Volatile Write Cache: Present 00:24:39.023 Atomic Write Unit (Normal): 1 00:24:39.023 Atomic Write Unit (PFail): 1 00:24:39.023 Atomic Compare & Write Unit: 1 00:24:39.023 Fused Compare & Write: Supported 00:24:39.023 Scatter-Gather List 00:24:39.023 SGL Command Set: Supported 00:24:39.023 SGL Keyed: Supported 00:24:39.023 SGL Bit Bucket Descriptor: Not Supported 00:24:39.023 SGL Metadata Pointer: Not Supported 00:24:39.023 Oversized SGL: Not Supported 00:24:39.023 SGL Metadata Address: Not Supported 00:24:39.023 SGL Offset: Supported 00:24:39.024 Transport SGL Data Block: Not Supported 00:24:39.024 Replay Protected Memory Block: Not Supported 00:24:39.024 00:24:39.024 Firmware Slot Information 00:24:39.024 ========================= 00:24:39.024 Active slot: 1 00:24:39.024 Slot 1 Firmware Revision: 25.01 00:24:39.024 00:24:39.024 00:24:39.024 Commands Supported and Effects 00:24:39.024 ============================== 00:24:39.024 Admin Commands 00:24:39.024 -------------- 00:24:39.024 Get Log Page (02h): Supported 00:24:39.024 Identify (06h): Supported 00:24:39.024 Abort (08h): Supported 00:24:39.024 Set Features (09h): Supported 00:24:39.024 Get Features (0Ah): Supported 00:24:39.024 Asynchronous Event Request (0Ch): Supported 00:24:39.024 Keep Alive (18h): Supported 00:24:39.024 I/O Commands 00:24:39.024 ------------ 00:24:39.024 Flush (00h): Supported LBA-Change 00:24:39.024 Write (01h): Supported LBA-Change 00:24:39.024 Read (02h): Supported 00:24:39.024 Compare (05h): Supported 00:24:39.024 Write Zeroes (08h): Supported LBA-Change 00:24:39.024 Dataset Management (09h): Supported LBA-Change 00:24:39.024 Copy (19h): Supported LBA-Change 00:24:39.024 00:24:39.024 Error Log 00:24:39.024 ========= 00:24:39.024 00:24:39.024 Arbitration 00:24:39.024 =========== 00:24:39.024 Arbitration Burst: 1 00:24:39.024 00:24:39.024 Power Management 00:24:39.024 ================ 00:24:39.024 Number of Power States: 1 00:24:39.024 Current Power State: Power State #0 00:24:39.024 Power State #0: 00:24:39.024 Max Power: 0.00 W 00:24:39.024 Non-Operational State: Operational 00:24:39.024 Entry Latency: Not Reported 00:24:39.024 Exit Latency: Not Reported 00:24:39.024 Relative Read Throughput: 0 00:24:39.024 Relative Read Latency: 0 00:24:39.024 Relative Write Throughput: 0 00:24:39.024 Relative Write Latency: 0 
00:24:39.024 Idle Power: Not Reported 00:24:39.024 Active Power: Not Reported 00:24:39.024 Non-Operational Permissive Mode: Not Supported 00:24:39.024 00:24:39.024 Health Information 00:24:39.024 ================== 00:24:39.024 Critical Warnings: 00:24:39.024 Available Spare Space: OK 00:24:39.024 Temperature: OK 00:24:39.024 Device Reliability: OK 00:24:39.024 Read Only: No 00:24:39.024 Volatile Memory Backup: OK 00:24:39.024 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:39.024 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:39.024 Available Spare: 0% 00:24:39.024 Available Spare Threshold: 0% 00:24:39.024 Life Percentage Used:[2024-11-20 07:25:13.781177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.024 [2024-11-20 07:25:13.781182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ccf550) 00:24:39.024 [2024-11-20 07:25:13.781189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.024 [2024-11-20 07:25:13.781202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31b80, cid 7, qid 0 00:24:39.024 [2024-11-20 07:25:13.781421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.024 [2024-11-20 07:25:13.781428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.024 [2024-11-20 07:25:13.781432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.024 [2024-11-20 07:25:13.781437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31b80) on tqpair=0x1ccf550 00:24:39.024 [2024-11-20 07:25:13.781467] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:39.024 [2024-11-20 07:25:13.781478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31100) on tqpair=0x1ccf550 00:24:39.024 [2024-11-20 07:25:13.781486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.024 [2024-11-20 07:25:13.781492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31280) on tqpair=0x1ccf550 00:24:39.024 [2024-11-20 07:25:13.781498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.024 [2024-11-20 07:25:13.781504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31400) on tqpair=0x1ccf550 00:24:39.024 [2024-11-20 07:25:13.781511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.024 [2024-11-20 07:25:13.781519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.024 [2024-11-20 07:25:13.781525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.024 [2024-11-20 07:25:13.781534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.024 [2024-11-20 07:25:13.781538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.024 [2024-11-20 07:25:13.781543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.024 [2024-11-20 07:25:13.781551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:39.024 [2024-11-20 07:25:13.781564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.286 [2024-11-20 07:25:13.781733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.286 [2024-11-20 07:25:13.781741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.286 [2024-11-20 07:25:13.781746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.781750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.286 [2024-11-20 07:25:13.781757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.781761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.781765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.286 [2024-11-20 07:25:13.781774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.286 [2024-11-20 07:25:13.781787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.286 [2024-11-20 07:25:13.781980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.286 [2024-11-20 07:25:13.781987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.286 [2024-11-20 07:25:13.781991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.781994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.286 [2024-11-20 07:25:13.781999] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:39.286 [2024-11-20 07:25:13.782004] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:39.286 [2024-11-20 07:25:13.782013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.286 [2024-11-20 07:25:13.782027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.286 [2024-11-20 07:25:13.782037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.286 [2024-11-20 07:25:13.782236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.286 [2024-11-20 07:25:13.782244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.286 [2024-11-20 07:25:13.782248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.286 [2024-11-20 07:25:13.782263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.286 [2024-11-20 07:25:13.782277] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.286 [2024-11-20 07:25:13.782288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.286 [2024-11-20 07:25:13.782489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.286 [2024-11-20 07:25:13.782495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.286 [2024-11-20 07:25:13.782499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.286 [2024-11-20 07:25:13.782512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.286 [2024-11-20 07:25:13.782521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.286 [2024-11-20 07:25:13.782528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.286 [2024-11-20 07:25:13.782538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.286 [2024-11-20 07:25:13.782689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.286 [2024-11-20 07:25:13.782695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.286 [2024-11-20 07:25:13.782699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.782712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.782726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.782736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.782912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.782919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.782922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.782936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.782943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.782950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.782960] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.783143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.783149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.783153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.783166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.783180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.783190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.783345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.783353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.783356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.783370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.783384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.783394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.783595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.783602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.783605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.783618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.783633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.783642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.783819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 
07:25:13.783825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.783829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.783842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.783850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.783856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.783871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.784102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.784108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.784111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.784125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.784139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.784149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.784351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.784357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.784362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.784376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.784390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.784400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.784605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.784611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.784615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 
[2024-11-20 07:25:13.784619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.784628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.784643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.784652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.784804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.784810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.784814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.784827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.784835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ccf550) 00:24:39.287 [2024-11-20 07:25:13.784841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.287 [2024-11-20 07:25:13.784851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d31580, cid 3, qid 0 00:24:39.287 [2024-11-20 07:25:13.788871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.287 [2024-11-20 07:25:13.788879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.287 [2024-11-20 07:25:13.788883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.287 [2024-11-20 07:25:13.788887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d31580) on tqpair=0x1ccf550 00:24:39.287 [2024-11-20 07:25:13.788894] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:24:39.287 0% 00:24:39.287 Data Units Read: 0 00:24:39.287 Data Units Written: 0 00:24:39.287 Host Read Commands: 0 00:24:39.288 Host Write Commands: 0 00:24:39.288 Controller Busy Time: 0 minutes 00:24:39.288 Power Cycles: 0 00:24:39.288 Power On Hours: 0 hours 00:24:39.288 Unsafe Shutdowns: 0 00:24:39.288 Unrecoverable Media Errors: 0 00:24:39.288 Lifetime Error Log Entries: 0 00:24:39.288 Warning Temperature Time: 0 minutes 00:24:39.288 Critical Temperature Time: 0 minutes 00:24:39.288 00:24:39.288 Number of Queues 00:24:39.288 ================ 00:24:39.288 Number of I/O Submission Queues: 127 00:24:39.288 Number of I/O Completion Queues: 127 00:24:39.288 00:24:39.288 Active Namespaces 00:24:39.288 ================= 00:24:39.288 Namespace ID:1 00:24:39.288 Error Recovery Timeout: Unlimited 00:24:39.288 Command Set Identifier: NVM (00h) 00:24:39.288 Deallocate: Supported 00:24:39.288 Deallocated/Unwritten Error: Not Supported 00:24:39.288 Deallocated Read Value: Unknown 00:24:39.288 Deallocate in Write Zeroes: 
Not Supported 00:24:39.288 Deallocated Guard Field: 0xFFFF 00:24:39.288 Flush: Supported 00:24:39.288 Reservation: Supported 00:24:39.288 Namespace Sharing Capabilities: Multiple Controllers 00:24:39.288 Size (in LBAs): 131072 (0GiB) 00:24:39.288 Capacity (in LBAs): 131072 (0GiB) 00:24:39.288 Utilization (in LBAs): 131072 (0GiB) 00:24:39.288 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:39.288 EUI64: ABCDEF0123456789 00:24:39.288 UUID: bbdcba04-513e-45b7-85d4-fb8f2f8f2440 00:24:39.288 Thin Provisioning: Not Supported 00:24:39.288 Per-NS Atomic Units: Yes 00:24:39.288 Atomic Boundary Size (Normal): 0 00:24:39.288 Atomic Boundary Size (PFail): 0 00:24:39.288 Atomic Boundary Offset: 0 00:24:39.288 Maximum Single Source Range Length: 65535 00:24:39.288 Maximum Copy Length: 65535 00:24:39.288 Maximum Source Range Count: 1 00:24:39.288 NGUID/EUI64 Never Reused: No 00:24:39.288 Namespace Write Protected: No 00:24:39.288 Number of LBA Formats: 1 00:24:39.288 Current LBA Format: LBA Format #00 00:24:39.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:39.288 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.288 rmmod nvme_tcp 00:24:39.288 rmmod nvme_fabrics 00:24:39.288 rmmod nvme_keyring 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1384905 ']' 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1384905 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1384905 ']' 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1384905 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1384905 
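That closes the identify pass: the GET LOG PAGE capsules sent earlier fed the controller and namespace report above (cdw10 0x07ff0001 reads 2048 dwords of the Error Information log 01h, 0x007f0002 and 0x007f0003 read 512 bytes each of the SMART/Health 02h and Firmware Slot 03h logs, and 0x03ff0005 reads 1024 dwords of the Commands Supported and Effects log 05h, matching the C2H payload sizes of 8192, 512, 512 and 4096 bytes), after which the host shut the controller down cleanly in 6 ms. The trailing xtrace is nvmftestfini tearing the run down. A rough manual equivalent of that teardown, with the RPC socket defaults and the target PID as assumptions:

  # Sketch of the cleanup the trace performs (the target PID is 1384905 in this run).
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # same RPC the test issues
  modprobe -v -r nvme-tcp   # also unloads nvme_fabrics/nvme_keyring, per the rmmod lines above
  kill -9 "$tgt_pid"        # $tgt_pid: assumed variable holding the nvmf_tgt reactor PID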
00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1384905' 00:24:39.288 killing process with pid 1384905 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1384905 00:24:39.288 07:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1384905 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.549 07:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.465 00:24:41.465 real 0m12.598s 00:24:41.465 user 0m8.987s 00:24:41.465 sys 0m6.793s 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:41.465 ************************************ 00:24:41.465 END TEST nvmf_identify 00:24:41.465 ************************************ 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:41.465 07:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.728 ************************************ 00:24:41.728 START TEST nvmf_perf 00:24:41.728 ************************************ 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:41.728 * Looking for test storage... 
00:24:41.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.728 --rc genhtml_branch_coverage=1 00:24:41.728 --rc genhtml_function_coverage=1 00:24:41.728 --rc genhtml_legend=1 00:24:41.728 --rc geninfo_all_blocks=1 00:24:41.728 --rc geninfo_unexecuted_blocks=1 00:24:41.728 00:24:41.728 ' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.728 --rc genhtml_branch_coverage=1 00:24:41.728 --rc genhtml_function_coverage=1 00:24:41.728 --rc genhtml_legend=1 00:24:41.728 --rc geninfo_all_blocks=1 00:24:41.728 --rc geninfo_unexecuted_blocks=1 00:24:41.728 00:24:41.728 ' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.728 --rc genhtml_branch_coverage=1 00:24:41.728 --rc genhtml_function_coverage=1 00:24:41.728 --rc genhtml_legend=1 00:24:41.728 --rc geninfo_all_blocks=1 00:24:41.728 --rc geninfo_unexecuted_blocks=1 00:24:41.728 00:24:41.728 ' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.728 --rc genhtml_branch_coverage=1 00:24:41.728 --rc genhtml_function_coverage=1 00:24:41.728 --rc genhtml_legend=1 00:24:41.728 --rc geninfo_all_blocks=1 00:24:41.728 --rc geninfo_unexecuted_blocks=1 00:24:41.728 00:24:41.728 ' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.728 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.729 07:25:16 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.729 07:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:50.004 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:50.004 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.004 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:50.005 Found net devices under 0000:31:00.0: cvl_0_0 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.005 07:25:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:50.005 Found net devices under 0000:31:00.1: cvl_0_1 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.005 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.266 07:25:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:50.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:50.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms
00:24:50.266
00:24:50.266 --- 10.0.0.2 ping statistics ---
00:24:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.266 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:50.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:50.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms
00:24:50.266
00:24:50.266 --- 10.0.0.1 ping statistics ---
00:24:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:50.266 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1389946
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1389946
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1389946 ']'
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:24:50.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:50.266 07:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.527 [2024-11-20 07:25:25.031484] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:24:50.527 [2024-11-20 07:25:25.031551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.527 [2024-11-20 07:25:25.122843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.527 [2024-11-20 07:25:25.164292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.527 [2024-11-20 07:25:25.164328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.527 [2024-11-20 07:25:25.164336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.527 [2024-11-20 07:25:25.164343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.527 [2024-11-20 07:25:25.164349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.527 [2024-11-20 07:25:25.166136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.527 [2024-11-20 07:25:25.166251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.527 [2024-11-20 07:25:25.166406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.527 [2024-11-20 07:25:25.166407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.099 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:51.099 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:51.099 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.099 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.099 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.360 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.360 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:51.360 07:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:51.621 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:51.621 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:51.882 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:51.882 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:52.143 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
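With the Malloc0 bdev created above (and Nvme0n1 about to be claimed from the local controller at 0000:65:00.0), perf.sh assembles the NVMe-oF/TCP target purely through rpc.py calls. A condensed sketch of that bring-up, writing rpc.py by bare name where the traced run uses the full /var/jenkins/... path:

# condensed sketch of the target setup traced just below
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each step is visible in the trace that follows, including the target's TCP Transport Init and Listening notices.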
00:24:52.143 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:24:52.143 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:52.143 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:52.143 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:52.405 [2024-11-20 07:25:26.909938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:52.405 07:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:52.405 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:52.405 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:52.667 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:52.667 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:52.927 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:52.927 [2024-11-20 07:25:27.648660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:52.927 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:53.188 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:53.188 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:53.188 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:53.188 07:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:54.572 Initializing NVMe Controllers
00:24:54.572 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:24:54.572 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:54.572 Initialization complete. Launching workers.
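Note how the same spdk_nvme_perf binary drives both sides of the comparison: only the -r transport ID string changes, so the baseline below talks to the local PCIe controller while the later runs dial the TCP listener configured above. Both forms, as used in this job:

# local PCIe baseline
spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
# NVMe-oF/TCP against the listener created above
spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'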
00:24:54.572 ========================================================
00:24:54.572 Latency(us)
00:24:54.572 Device Information : IOPS MiB/s Average min max
00:24:54.572 PCIE (0000:65:00.0) NSID 1 from core 0: 79446.61 310.34 402.01 13.17 5220.67
00:24:54.572 ========================================================
00:24:54.572 Total : 79446.61 310.34 402.01 13.17 5220.67
00:24:54.572
00:24:54.572 07:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:55.955 Initializing NVMe Controllers
00:24:55.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:55.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:55.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:55.955 Initialization complete. Launching workers.
00:24:55.955 ========================================================
00:24:55.955 Latency(us)
00:24:55.955 Device Information : IOPS MiB/s Average min max
00:24:55.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 0.28 14374.26 244.82 46253.17
00:24:55.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 42.00 0.16 24109.28 7389.38 47892.44
00:24:55.955 ========================================================
00:24:55.955 Total : 113.00 0.44 17992.59 244.82 47892.44
00:24:55.955
00:24:55.955 07:25:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:57.342 Initializing NVMe Controllers
00:24:57.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:57.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:57.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:57.342 Initialization complete. Launching workers.
00:24:57.342 ========================================================
00:24:57.342 Latency(us)
00:24:57.342 Device Information : IOPS MiB/s Average min max
00:24:57.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10368.44 40.50 3130.20 487.37 45511.15
00:24:57.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3919.03 15.31 8207.54 6976.52 16021.16
00:24:57.342 ========================================================
00:24:57.342 Total : 14287.47 55.81 4522.91 487.37 45511.15
00:24:57.342
00:24:57.342 07:25:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:57.342 07:25:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:57.342 07:25:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:59.888 Initializing NVMe Controllers
00:24:59.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:59.888 Controller IO queue size 128, less than required.
00:24:59.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
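A quick consistency check on these tables: the MiB/s column is simply IOPS times the IO size passed with -o. For the local PCIe baseline above (-o 4096):

# 79446.61 IO/s * 4096 B per IO, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 79446.61 * 4096 / (1024 * 1024) }'
# prints 310.34 MiB/s, matching the table

The same relation holds for the TCP runs, which makes a mis-set block size easy to spot in a report.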
00:24:59.888 Controller IO queue size 128, less than required.
00:24:59.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:59.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:59.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:59.888 Initialization complete. Launching workers.
00:24:59.888 ========================================================
00:24:59.888 Latency(us)
00:24:59.888 Device Information : IOPS MiB/s Average min max
00:24:59.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1544.82 386.20 84103.26 55755.46 143553.05
00:24:59.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 624.52 156.13 213818.17 63460.71 315291.68
00:24:59.888 ========================================================
00:24:59.888 Total : 2169.34 542.34 121446.34 55755.46 315291.68
00:24:59.888
00:24:59.888 07:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:00.149 No valid NVMe controllers or AIO or URING devices found
00:25:00.149 Initializing NVMe Controllers
00:25:00.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:00.149 Controller IO queue size 128, less than required.
00:25:00.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:00.149 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:00.149 Controller IO queue size 128, less than required.
00:25:00.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:00.149 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:00.149 WARNING: Some requested NVMe devices were skipped
00:25:00.149 07:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:02.697 Initializing NVMe Controllers
00:25:02.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:02.697 Controller IO queue size 128, less than required.
00:25:02.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:02.697 Controller IO queue size 128, less than required.
00:25:02.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:02.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:02.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:02.697 Initialization complete. Launching workers.
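Two details in this stretch are easy to miss. The -o 36964 run above found no testable namespaces by design: spdk_nvme_perf requires the IO size to be a whole multiple of the namespace sector size, and 36964 is not divisible by 512 (72 * 512 = 36864, remainder 100), so both namespaces were removed and the tool reported that no valid devices remained:

# the divisibility check behind the 'not a multiple of ... sector size' warnings
echo $(( 36964 % 512 ))   # prints 100; non-zero, so the namespace is dropped

The final run instead passes --transport-stat, so per-poll-group TCP counters are printed before its latency table. In the statistics that follow, polls minus idle_polls equals sock_completions on both queues (21707 - 12641 = 9066 and 22188 - 12636 = 9552), i.e. in this run every non-idle poll reaped exactly one batch of socket completions.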
00:25:02.697
00:25:02.697 ====================
00:25:02.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:02.697 TCP transport:
00:25:02.697 polls: 21707
00:25:02.697 idle_polls: 12641
00:25:02.697 sock_completions: 9066
00:25:02.697 nvme_completions: 6359
00:25:02.697 submitted_requests: 9490
00:25:02.697 queued_requests: 1
00:25:02.697
00:25:02.697 ====================
00:25:02.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:02.697 TCP transport:
00:25:02.697 polls: 22188
00:25:02.697 idle_polls: 12636
00:25:02.697 sock_completions: 9552
00:25:02.697 nvme_completions: 7287
00:25:02.697 submitted_requests: 11072
00:25:02.697 queued_requests: 1
00:25:02.697 ========================================================
00:25:02.697 Latency(us)
00:25:02.697 Device Information : IOPS MiB/s Average min max
00:25:02.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1589.46 397.36 82255.03 43307.58 127273.10
00:25:02.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1821.45 455.36 71356.68 34145.79 113283.01
00:25:02.698 ========================================================
00:25:02.698 Total : 3410.91 852.73 76435.23 34145.79 127273.10
00:25:02.698
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:02.698 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:02.698 rmmod nvme_tcp
00:25:02.698 rmmod nvme_fabrics
00:25:02.698 rmmod nvme_keyring
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1389946 ']'
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1389946
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1389946 ']'
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1389946
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1389946
00:25:02.958 07:25:37
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1389946' 00:25:02.958 killing process with pid 1389946 00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1389946 00:25:02.958 07:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1389946 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.874 07:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.421 00:25:07.421 real 0m25.359s 00:25:07.421 user 0m59.175s 00:25:07.421 sys 0m9.194s 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:07.421 ************************************ 00:25:07.421 END TEST nvmf_perf 00:25:07.421 ************************************ 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.421 ************************************ 00:25:07.421 START TEST nvmf_fio_host 00:25:07.421 ************************************ 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.421 * Looking for test storage... 
00:25:07.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.421 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:07.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.422 --rc genhtml_branch_coverage=1 00:25:07.422 --rc genhtml_function_coverage=1 00:25:07.422 --rc genhtml_legend=1 00:25:07.422 --rc geninfo_all_blocks=1 00:25:07.422 --rc geninfo_unexecuted_blocks=1 00:25:07.422 00:25:07.422 ' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:07.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.422 --rc genhtml_branch_coverage=1 00:25:07.422 --rc genhtml_function_coverage=1 00:25:07.422 --rc genhtml_legend=1 00:25:07.422 --rc geninfo_all_blocks=1 00:25:07.422 --rc geninfo_unexecuted_blocks=1 00:25:07.422 00:25:07.422 ' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:07.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.422 --rc genhtml_branch_coverage=1 00:25:07.422 --rc genhtml_function_coverage=1 00:25:07.422 --rc genhtml_legend=1 00:25:07.422 --rc geninfo_all_blocks=1 00:25:07.422 --rc geninfo_unexecuted_blocks=1 00:25:07.422 00:25:07.422 ' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:07.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.422 --rc genhtml_branch_coverage=1 00:25:07.422 --rc genhtml_function_coverage=1 00:25:07.422 --rc genhtml_legend=1 00:25:07.422 --rc geninfo_all_blocks=1 00:25:07.422 --rc geninfo_unexecuted_blocks=1 00:25:07.422 00:25:07.422 ' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.422 07:25:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.422 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.423 
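The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is a plain bash pitfall rather than a test failure: the traced command expands to '[' '' -eq 1 ']', and [ requires both operands of -eq to be integers, so an unset flag variable makes the test itself error out; the non-zero status is simply treated as a false condition and the script continues. A minimal reproduction with a guarded variant (the variable name here is illustrative, not the one common.sh actually tests):

    flag=''                            # unset/empty, as in the trace
    [ "$flag" -eq 1 ] && echo on       # errors: "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ] && echo on  # guarded: empty defaults to 0, test stays numeric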
07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.423 07:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:15.569 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:15.569 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:15.569 Found net devices under 0000:31:00.0: cvl_0_0 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:15.569 Found net devices under 0000:31:00.1: cvl_0_1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.569 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.570 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:25:15.831 00:25:15.831 --- 10.0.0.2 ping statistics --- 00:25:15.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.831 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:15.831 00:25:15.831 --- 10.0.0.1 ping statistics --- 00:25:15.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.831 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1397542 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1397542 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1397542 ']' 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.831 07:25:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.831 [2024-11-20 07:25:50.587733] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
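The nvmf_tcp_init sequence above is what lets a single machine act as both NVMe/TCP target and initiator: the two ports of the E810 NIC are split across a network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened in the firewall, and a ping in each direction proves the path. Condensed into a standalone sketch (interface names are the cvl_* ones this rig reported and would differ on other hardware; the flush/comment details are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the target
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.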
00:25:15.831 [2024-11-20 07:25:50.587803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.092 [2024-11-20 07:25:50.686816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.092 [2024-11-20 07:25:50.729068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.092 [2024-11-20 07:25:50.729109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.092 [2024-11-20 07:25:50.729117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.092 [2024-11-20 07:25:50.729124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.092 [2024-11-20 07:25:50.729134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.092 [2024-11-20 07:25:50.730944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.092 [2024-11-20 07:25:50.731102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.092 [2024-11-20 07:25:50.731254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.092 [2024-11-20 07:25:50.731255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.662 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.662 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:16.662 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:16.922 [2024-11-20 07:25:51.540054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.922 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:16.922 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.922 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.922 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:17.182 Malloc1 00:25:17.182 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.442 07:25:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:17.442 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.702 [2024-11-20 07:25:52.337722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.702 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:17.963 07:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:18.224 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:18.224 fio-3.35 00:25:18.224 Starting 1 thread 00:25:20.768 00:25:20.768 test: (groupid=0, jobs=1): 
err= 0: pid=1398227: Wed Nov 20 07:25:55 2024 00:25:20.768 read: IOPS=9587, BW=37.4MiB/s (39.3MB/s)(75.1MiB/2006msec) 00:25:20.768 slat (usec): min=2, max=278, avg= 2.16, stdev= 2.84 00:25:20.768 clat (usec): min=3678, max=13181, avg=7348.56, stdev=546.07 00:25:20.768 lat (usec): min=3713, max=13183, avg=7350.72, stdev=545.86 00:25:20.768 clat percentiles (usec): 00:25:20.768 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6915], 00:25:20.768 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:25:20.768 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:25:20.768 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[12256], 00:25:20.768 | 99.99th=[13173] 00:25:20.768 bw ( KiB/s): min=37072, max=39072, per=99.89%, avg=38308.00, stdev=877.37, samples=4 00:25:20.768 iops : min= 9268, max= 9768, avg=9577.00, stdev=219.34, samples=4 00:25:20.768 write: IOPS=9592, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec); 0 zone resets 00:25:20.768 slat (usec): min=2, max=270, avg= 2.23, stdev= 2.14 00:25:20.768 clat (usec): min=2878, max=10854, avg=5909.73, stdev=437.80 00:25:20.768 lat (usec): min=2896, max=10856, avg=5911.96, stdev=437.66 00:25:20.768 clat percentiles (usec): 00:25:20.768 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:20.768 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:25:20.768 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6587], 00:25:20.768 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 9896], 00:25:20.768 | 99.99th=[10814] 00:25:20.768 bw ( KiB/s): min=37960, max=38832, per=100.00%, avg=38386.00, stdev=366.16, samples=4 00:25:20.768 iops : min= 9490, max= 9708, avg=9596.50, stdev=91.54, samples=4 00:25:20.768 lat (msec) : 4=0.06%, 10=99.84%, 20=0.11% 00:25:20.768 cpu : usr=72.92%, sys=25.84%, ctx=42, majf=0, minf=17 00:25:20.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:20.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:20.769 issued rwts: total=19232,19242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:20.769 00:25:20.769 Run status group 0 (all jobs): 00:25:20.769 READ: bw=37.4MiB/s (39.3MB/s), 37.4MiB/s-37.4MiB/s (39.3MB/s-39.3MB/s), io=75.1MiB (78.8MB), run=2006-2006msec 00:25:20.769 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.8MB), run=2006-2006msec 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 
00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:20.769 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:21.051 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:21.052 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:21.052 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:21.052 07:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:21.312 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:21.312 fio-3.35 00:25:21.312 Starting 1 thread 00:25:22.253 [2024-11-20 07:25:56.802162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2073c10 is same with the state(6) to be set 00:25:23.638 00:25:23.638 test: (groupid=0, jobs=1): err= 0: pid=1399051: Wed Nov 20 07:25:58 2024 00:25:23.638 read: IOPS=9390, BW=147MiB/s (154MB/s)(294MiB/2006msec) 00:25:23.638 slat (usec): min=3, max=110, avg= 3.59, stdev= 1.61 00:25:23.638 clat (usec): min=2092, max=17553, avg=8280.18, stdev=2068.94 00:25:23.638 lat (usec): min=2095, max=17557, avg=8283.77, stdev=2069.07 00:25:23.638 clat percentiles (usec): 00:25:23.638 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6390], 00:25:23.638 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:25:23.638 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:25:23.638 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14484], 99.95th=[14615], 00:25:23.638 | 99.99th=[15139] 00:25:23.638 bw ( KiB/s): min=66432, max=82688, per=49.05%, avg=73704.00, stdev=6709.80, samples=4 00:25:23.638 iops : min= 4152, max= 
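Both fio runs here use SPDK's external fio plugin rather than a kernel block device: the shared object in build/fio/spdk_nvme is LD_PRELOADed into the stock fio binary, the job file selects ioengine=spdk (visible in the "ioengine=spdk, iodepth=128" banner), and --filename carries transport parameters instead of a device path. Stripped of the sanitizer probing, the traced invocation reduces to:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

The ldd | grep libasan / libclang_rt.asan loop that precedes each run only decides whether an ASAN runtime must be preloaded ahead of the plugin; on this build both probes come back empty, so LD_PRELOAD ends up containing the plugin alone.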
5168, avg=4606.50, stdev=419.36, samples=4 00:25:23.638 write: IOPS=5529, BW=86.4MiB/s (90.6MB/s)(151MiB/1751msec); 0 zone resets 00:25:23.638 slat (usec): min=39, max=455, avg=40.95, stdev= 8.41 00:25:23.638 clat (usec): min=2287, max=16008, avg=9481.63, stdev=1628.92 00:25:23.638 lat (usec): min=2326, max=16145, avg=9522.58, stdev=1630.43 00:25:23.638 clat percentiles (usec): 00:25:23.638 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:25:23.638 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:23.638 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12387], 00:25:23.638 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15533], 99.95th=[15795], 00:25:23.638 | 99.99th=[16057] 00:25:23.638 bw ( KiB/s): min=69728, max=86112, per=87.03%, avg=76992.00, stdev=6777.25, samples=4 00:25:23.638 iops : min= 4358, max= 5382, avg=4812.00, stdev=423.58, samples=4 00:25:23.638 lat (msec) : 4=0.60%, 10=72.09%, 20=27.31% 00:25:23.638 cpu : usr=84.99%, sys=13.67%, ctx=15, majf=0, minf=39 00:25:23.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:23.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:23.638 issued rwts: total=18838,9682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:23.638 00:25:23.638 Run status group 0 (all jobs): 00:25:23.638 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=294MiB (309MB), run=2006-2006msec 00:25:23.638 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=151MiB (159MB), run=1751-1751msec 00:25:23.638 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.899 rmmod nvme_tcp 00:25:23.899 rmmod nvme_fabrics 00:25:23.899 rmmod nvme_keyring 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1397542 ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1397542 00:25:23.899 07:25:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1397542 ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 1397542 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1397542 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1397542' 00:25:23.899 killing process with pid 1397542 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1397542 00:25:23.899 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1397542 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.158 07:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.070 07:26:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.332 00:25:26.332 real 0m19.149s 00:25:26.332 user 1m6.692s 00:25:26.332 sys 0m8.593s 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.332 ************************************ 00:25:26.332 END TEST nvmf_fio_host 00:25:26.332 ************************************ 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.332 ************************************ 00:25:26.332 START TEST nvmf_failover 00:25:26.332 
************************************ 00:25:26.332 07:26:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:26.332 * Looking for test storage... 00:25:26.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:26.332 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:26.332 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:26.332 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:26.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.594 --rc genhtml_branch_coverage=1 00:25:26.594 --rc genhtml_function_coverage=1 00:25:26.594 --rc genhtml_legend=1 00:25:26.594 --rc geninfo_all_blocks=1 00:25:26.594 --rc geninfo_unexecuted_blocks=1 00:25:26.594 00:25:26.594 ' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:26.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.594 --rc genhtml_branch_coverage=1 00:25:26.594 --rc genhtml_function_coverage=1 00:25:26.594 --rc genhtml_legend=1 00:25:26.594 --rc geninfo_all_blocks=1 00:25:26.594 --rc geninfo_unexecuted_blocks=1 00:25:26.594 00:25:26.594 ' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:26.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.594 --rc genhtml_branch_coverage=1 00:25:26.594 --rc genhtml_function_coverage=1 00:25:26.594 --rc genhtml_legend=1 00:25:26.594 --rc geninfo_all_blocks=1 00:25:26.594 --rc geninfo_unexecuted_blocks=1 00:25:26.594 00:25:26.594 ' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:26.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:26.594 --rc genhtml_branch_coverage=1 00:25:26.594 --rc genhtml_function_coverage=1 00:25:26.594 --rc genhtml_legend=1 00:25:26.594 --rc geninfo_all_blocks=1 00:25:26.594 --rc geninfo_unexecuted_blocks=1 00:25:26.594 00:25:26.594 ' 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.594 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:26.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
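The lcov version probe at the top of this test (the lt/cmp_versions trace from scripts/common.sh above) compares dotted versions component-wise: each string is split on ., - and :, and fields are compared numerically until one side wins. A minimal standalone sketch of that logic — version_lt is an illustrative name, and the real helper additionally tracks unequal field counts via ver1_l/ver2_l:

    version_lt() {                        # succeeds if $1 < $2, component-wise
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "old lcov"  # matches the trace: 1 < 2 decides in the first field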
00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:26.595 07:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:34.742 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:34.742 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:34.742 Found net devices under 0000:31:00.0: cvl_0_0 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:34.742 Found net devices under 0000:31:00.1: cvl_0_1 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.742 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.743 07:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:25:34.743 00:25:34.743 --- 10.0.0.2 ping statistics --- 00:25:34.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.743 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:25:34.743 00:25:34.743 --- 10.0.0.1 ping statistics --- 00:25:34.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.743 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1404063 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1404063 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1404063 ']' 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:34.743 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.743 [2024-11-20 07:26:09.140439] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:25:34.743 [2024-11-20 07:26:09.140504] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.743 [2024-11-20 07:26:09.247347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.743 [2024-11-20 07:26:09.298012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
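(Annotation: the namespace plumbing above reads best as one unit: the target-side port cvl_0_0 is moved into a private network namespace, each side gets one address on 10.0.0.0/24, an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. A condensed replay of the same commands, a sketch assuming the interface and namespace names used in this log; needs root:

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1     # target namespace -> root namespace

End of annotation.)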
00:25:34.743 [2024-11-20 07:26:09.298062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.743 [2024-11-20 07:26:09.298071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.743 [2024-11-20 07:26:09.298079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.743 [2024-11-20 07:26:09.298085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.743 [2024-11-20 07:26:09.299911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.743 [2024-11-20 07:26:09.300106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.743 [2024-11-20 07:26:09.300107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.314 07:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:35.574 [2024-11-20 07:26:10.149021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.574 07:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:35.835 Malloc0 00:25:35.835 07:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.835 07:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.096 07:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.357 [2024-11-20 07:26:10.890325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.357 07:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:36.357 [2024-11-20 07:26:11.066787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:36.357 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:36.618 [2024-11-20 07:26:11.243337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1404431 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1404431 /var/tmp/bdevperf.sock 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1404431 ']' 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.618 07:26:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:37.562 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.562 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:37.562 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:37.823 NVMe0n1 00:25:37.823 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:38.083 00:25:38.083 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1404769 00:25:38.083 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.083 07:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:39.026 07:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.286 [2024-11-20 07:26:13.815010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026370 is same with the state(6) to be set 00:25:39.286 [2024-11-20 07:26:13.815048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026370 is same with the state(6) to be set 00:25:39.286 [2024-11-20 07:26:13.815054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026370 is same with the state(6) to be set 00:25:39.286 
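(Annotation: at this point the test has provisioned everything the failover loop needs: the target got a TCP transport, a Malloc0-backed subsystem, and three listeners over its default RPC socket, while bdevperf, started with -z and therefore idle, got the same subsystem attached twice through /var/tmp/bdevperf.sock. A condensed replay of those calls, a sketch with the rpc.py path shortened relative to the absolute Jenkins checkout path; -x failover selects the failover multipath policy, so removing the active listener, as nvmf_subsystem_remove_listener does below, pushes I/O onto the surviving path:

rpc=scripts/rpc.py    # shortened; the log invokes the absolute SPDK path
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do             # three listeners to flip between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# Two paths to the same subsystem, registered through bdevperf's private RPC socket:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Release the parked bdevperf run (15 s verify workload, queue depth 128):
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

End of annotation.)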
00:25:39.286 [2024-11-20 07:26:13.815059 .. 07:26:13.815105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026370 is same with the state(6) to be set (the same message recurred at 11 further timestamps; collapsed here)
00:25:39.286 07:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:42.587 07:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:42.587
00:25:42.587 07:26:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:42.588 [2024-11-20 07:26:17.323731 .. 07:26:17.323813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1027170 is same with the state(6) to be set (8 occurrences collapsed)
00:25:42.848 07:26:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:46.149 07:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:46.149 [2024-11-20 07:26:20.512596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:46.149 07:26:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:47.129 07:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:47.129 [2024-11-20 07:26:21.701972 .. 07:26:21.702328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172400 is same with the state(6) to be set (roughly 70 occurrences collapsed)
00:25:47.130 07:26:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1404769
00:25:53.900 {
00:25:53.900   "results": [
00:25:53.900     {
00:25:53.900       "job": "NVMe0n1",
00:25:53.900       "core_mask": "0x1",
00:25:53.900       "workload": "verify",
00:25:53.900       "status": "finished",
00:25:53.900       "verify_range": {
00:25:53.900         "start": 0,
00:25:53.900         "length": 16384
00:25:53.900       },
00:25:53.900       "queue_depth": 128,
00:25:53.900       "io_size": 4096,
00:25:53.900       "runtime": 15.00719,
00:25:53.900       "iops": 11129.132102678783,
00:25:53.900       "mibps": 43.473172276088995,
00:25:53.900       "io_failed": 7189,
00:25:53.900       "io_timeout": 0,
00:25:53.900       "avg_latency_us": 10999.291097665982,
00:25:53.900       "min_latency_us": 768.0,
00:25:53.900       "max_latency_us": 30583.466666666667
00:25:53.900     }
00:25:53.900   ],
00:25:53.900   "core_count": 1
00:25:53.900 }
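(Annotation: a quick consistency check on the results block above: the reported throughput is just IOPS times the 4096-byte io_size, which any POSIX awk can confirm:

awk 'BEGIN { print 11129.132102678783 * 4096 / (1024 * 1024) }'    # prints 43.4732, matching "mibps"

The 7189 entries under io_failed are plausibly the commands cut off mid-flight by the three listener removals; the ABORTED - SQ DELETION dump replayed below shows exactly that pattern. End of annotation.)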
00:25:53.900 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1404431 ']'
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1404431'
00:25:53.901 killing process with pid 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1404431
00:25:53.901 07:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:53.901 [2024-11-20 07:26:11.312960] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:25:53.901 [2024-11-20 07:26:11.313019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404431 ]
00:25:53.901 [2024-11-20 07:26:11.391279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:53.901 [2024-11-20 07:26:11.427534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.901 Running I/O for 15 seconds...
00:25:53.901 11263.00 IOPS, 44.00 MiB/s [2024-11-20T06:26:28.668Z]
00:25:53.901 [2024-11-20 07:26:13.815503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.901 [2024-11-20 07:26:13.815536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.901 [2024-11-20 07:26:13.815547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.901 [2024-11-20 07:26:13.815555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.901 [2024-11-20 07:26:13.815564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.901 [2024-11-20 07:26:13.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.901 [2024-11-20 07:26:13.815580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:53.901 [2024-11-20 07:26:13.815588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:53.901 [2024-11-20 07:26:13.815595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22efd80 is same with the state(6) to be set
00:25:53.901 [2024-11-20 07:26:13.815644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.901 [2024-11-20 07:26:13.815656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same pattern — one nvme_io_qpair_print_command NOTICE per in-flight READ or WRITE on qid:1, each answered by an ABORTED - SQ DELETION (00/08) completion — continues for every outstanding command, lbas 96096 through 96792; roughly 70 command/completion pairs collapsed here)
00:25:53.903 [2024-11-20 07:26:13.817073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:53.903 [2024-11-20 07:26:13.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.903 [2024-11-20 07:26:13.817376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.903 [2024-11-20 07:26:13.817384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817426] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97048 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:13.817718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:13.817735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:13.817752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 
[2024-11-20 07:26:13.817769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:13.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:13.817803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:13.817820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.904 [2024-11-20 07:26:13.817845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.904 [2024-11-20 07:26:13.817852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:25:53.904 [2024-11-20 07:26:13.817860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:13.817909] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:53.904 [2024-11-20 07:26:13.817920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:53.904 [2024-11-20 07:26:13.821426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:53.904 [2024-11-20 07:26:13.821448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22efd80 (9): Bad file descriptor 00:25:53.904 [2024-11-20 07:26:13.937131] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:25:53.904 10629.50 IOPS, 41.52 MiB/s [2024-11-20T06:26:28.671Z] 10864.67 IOPS, 42.44 MiB/s [2024-11-20T06:26:28.671Z] 11009.00 IOPS, 43.00 MiB/s [2024-11-20T06:26:28.671Z] [2024-11-20 07:26:17.324156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.904 [2024-11-20 07:26:17.324191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.904 [2024-11-20 07:26:17.324335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.904 [2024-11-20 07:26:17.324343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.905 [2024-11-20 07:26:17.324845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.905 [2024-11-20 07:26:17.324866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.905 [2024-11-20 07:26:17.324884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 
07:26:17.324895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.905 [2024-11-20 07:26:17.324902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.905 [2024-11-20 07:26:17.324912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.905 [2024-11-20 07:26:17.324919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.324930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.324937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.324948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.324955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.324964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.324972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.324981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.324989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.324999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.906 [2024-11-20 07:26:17.325194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45976 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.906 [2024-11-20 07:26:17.325566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.906 [2024-11-20 07:26:17.325576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:53.906 [2024-11-20 07:26:17.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.325985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.325992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.907 [2024-11-20 07:26:17.326255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.907 [2024-11-20 07:26:17.326262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
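The "(00/08)" pair printed with every completion above is the NVMe status returned for each command: status code type 0x00 (generic command status) and status code 0x08, ABORTED - SQ DELETION. The reads were not failed by the media; they were aborted because their submission queue was torn down while the TCP path was going away. A minimal sketch of how a consumer of SPDK's public API could recognize this status in its completion callback, assuming <spdk/nvme.h> and an illustrative caller-owned retry flag (the constants and the spdk_nvme_cmd_cb shape are SPDK's; the flag handling is not from this log):

    #include <stdbool.h>
    #include <spdk/nvme.h>

    /* Matches the "ABORTED - SQ DELETION (00/08)" lines in this log:
     * SCT 0x00 (generic) with SC 0x08 (aborted, SQ deletion). */
    static bool
    cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* spdk_nvme_cmd_cb-shaped completion handler (illustrative). */
    static void
    io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *needs_retry = cb_arg;   /* caller-owned flag, assumed here */

        if (spdk_nvme_cpl_is_error(cpl)) {
            /* An SQ-deletion abort means the request never executed on
             * the target and is safe to resubmit once a new qpair is
             * connected; other errors stay non-retryable in this sketch. */
            *needs_retry = cpl_is_sq_deletion_abort(cpl);
        }
    }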
00:25:53.908 [2024-11-20 07:26:17.326272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:17.326362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.908 [2024-11-20 07:26:17.326388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.908 [2024-11-20 07:26:17.326399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46432 len:8 PRP1 0x0 PRP2 0x0 00:25:53.908 [2024-11-20 07:26:17.326407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326445] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:53.908 [2024-11-20 07:26:17.326466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.908 [2024-11-20 07:26:17.326474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.908 [2024-11-20 07:26:17.326491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:53.908 [2024-11-20 07:26:17.326499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.908 [2024-11-20 07:26:17.326506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.908 [2024-11-20 07:26:17.326522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:17.326530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:53.908 [2024-11-20 07:26:17.330106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:53.908 [2024-11-20 07:26:17.330132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22efd80 (9): Bad file descriptor 00:25:53.908 [2024-11-20 07:26:17.364448] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:53.908 10961.60 IOPS, 42.82 MiB/s [2024-11-20T06:26:28.675Z] 10988.67 IOPS, 42.92 MiB/s [2024-11-20T06:26:28.675Z] 11018.00 IOPS, 43.04 MiB/s [2024-11-20T06:26:28.675Z] 11032.50 IOPS, 43.10 MiB/s [2024-11-20T06:26:28.675Z] [2024-11-20 07:26:21.706296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.908 [2024-11-20 07:26:21.706333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.908 [2024-11-20 07:26:21.706753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.908 [2024-11-20 07:26:21.706760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 
07:26:21.706786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.706984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.706993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.909 [2024-11-20 07:26:21.707421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.909 [2024-11-20 07:26:21.707442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52232 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.910 [2024-11-20 07:26:21.707503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.910 [2024-11-20 07:26:21.707519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.910 [2024-11-20 07:26:21.707536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.910 [2024-11-20 07:26:21.707554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22efd80 is same with the state(6) to be set 00:25:53.910 [2024-11-20 07:26:21.707712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52240 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52248 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52256 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52264 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:25:53.910 [2024-11-20 07:26:21.707829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52272 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52280 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52288 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52296 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52304 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.707975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52312 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.707982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.707990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.707996] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52320 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52328 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52336 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52344 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52352 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52360 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.910 [2024-11-20 07:26:21.708157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:53.910 [2024-11-20 07:26:21.708163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52368 len:8 PRP1 0x0 PRP2 0x0 00:25:53.910 [2024-11-20 07:26:21.708170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.910 [2024-11-20 07:26:21.708178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52376 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52384 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52392 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52400 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52408 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 
[2024-11-20 07:26:21.708326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52416 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52424 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52432 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52440 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52448 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52456 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52464 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52472 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52480 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52488 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52496 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52504 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:52512 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52520 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52528 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52536 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52544 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.911 [2024-11-20 07:26:21.708782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52552 len:8 PRP1 0x0 PRP2 0x0 00:25:53.911 [2024-11-20 07:26:21.708789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.911 [2024-11-20 07:26:21.708797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.911 [2024-11-20 07:26:21.708802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52560 len:8 PRP1 0x0 PRP2 0x0 
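The tail of each burst shows the recovery itself. Requests still sitting in the driver's software queue when the qpair died are drained by nvme_qpair_abort_queued_reqs() and completed locally by nvme_qpair_manual_complete_request(); because they were never handed to the transport, their prints carry PRP1 0x0 PRP2 0x0 instead of the SGL TRANSPORT DATA BLOCK addresses seen on in-flight commands. Around the earlier burst, bdev_nvme starts a failover from 10.0.0.2:4421 to 10.0.0.2:4422, flushing the dead socket fails with "Bad file descriptor" as expected, the controller reset completes ("Resetting controller successful"), and the interleaved "IOPS, MiB/s" samples show throughput continuing across the reset. A rough sketch of the equivalent recovery using the raw driver API, assuming <spdk/nvme.h>; the bdev layer does this internally, and the single-write resubmission here is illustrative only:

    #include <errno.h>
    #include <spdk/nvme.h>

    static int
    reset_and_resubmit(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns,
                       void *buf, uint64_t lba, uint32_t lba_count,
                       spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        struct spdk_nvme_qpair *qpair;
        int rc;

        /* Corresponds to "resetting controller" ... "Resetting controller
         * successful" above; with multiple registered paths the driver
         * reconnects to the surviving one during this call. */
        rc = spdk_nvme_ctrlr_reset(ctrlr);
        if (rc != 0) {
            return rc;
        }

        /* The old qpair died with the old connection, so allocate a
         * fresh I/O qpair with default options. */
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        if (qpair == NULL) {
            return -ENOMEM;
        }

        /* Resubmit one of the writes that was "completed manually" with
         * ABORTED - SQ DELETION while still queued in software. */
        return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                      cb_fn, cb_arg, 0);
    }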
00:25:53.912 [2024-11-20 07:26:21.708816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52568 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.708842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52576 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.708872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52584 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.708898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52592 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.708925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.708945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52600 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.708952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.708960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.708965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52608 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52616 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52624 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52632 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52640 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52648 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52656 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52664 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52672 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52680 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52688 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52696 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52704 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:53.912 [2024-11-20 07:26:21.718811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52712 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52720 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52728 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52736 len:8 PRP1 0x0 PRP2 0x0 00:25:53.912 [2024-11-20 07:26:21.718927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.912 [2024-11-20 07:26:21.718935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.912 [2024-11-20 07:26:21.718940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.912 [2024-11-20 07:26:21.718946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51720 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.718954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.718961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.718967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.718973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51728 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.718980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.718988] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.718993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51736 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51744 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51752 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51760 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51776 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51784 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51792 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51800 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51808 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51816 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51824 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 
07:26:21.719318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51832 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51840 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51848 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51856 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51864 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51872 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719476] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51880 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51888 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51896 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.913 [2024-11-20 07:26:21.719552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.913 [2024-11-20 07:26:21.719557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.913 [2024-11-20 07:26:21.719563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51904 len:8 PRP1 0x0 PRP2 0x0 00:25:53.913 [2024-11-20 07:26:21.719570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51912 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51920 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51928 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51936 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51944 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51952 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51960 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51968 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 
07:26:21.719803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51976 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51992 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52000 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52008 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52016 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52024 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.719975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.719983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.719988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.719994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52032 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.720002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.720010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.720015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.720021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52040 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.720028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.720036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.720042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.720048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52048 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.720055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.720062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.720068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.720074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52056 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.720081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.720089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.914 [2024-11-20 07:26:21.720094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.914 [2024-11-20 07:26:21.720100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52064 len:8 PRP1 0x0 PRP2 0x0 00:25:53.914 [2024-11-20 07:26:21.720107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.914 [2024-11-20 07:26:21.720116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.720122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.720128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:52072 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.720143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.720148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52080 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.720161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.720169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.720174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.720180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52088 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.720188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.720196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.720201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.720207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52096 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.720226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.720231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.720237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52104 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52112 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52120 len:8 PRP1 0x0 PRP2 0x0 
00:25:53.915 [2024-11-20 07:26:21.727664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52128 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52136 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52144 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52152 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52160 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52168 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52176 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52184 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52192 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52200 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52208 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.727979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.727985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.727991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52216 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.727998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.728006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.728011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.728018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52224 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.728025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.728033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:53.915 [2024-11-20 07:26:21.728039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:53.915 [2024-11-20 07:26:21.728045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52232 len:8 PRP1 0x0 PRP2 0x0 00:25:53.915 [2024-11-20 07:26:21.728052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.915 [2024-11-20 07:26:21.728094] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:53.915 [2024-11-20 07:26:21.728104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:53.915 [2024-11-20 07:26:21.728147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22efd80 (9): Bad file descriptor 00:25:53.915 [2024-11-20 07:26:21.731625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:53.915 [2024-11-20 07:26:21.755965] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:53.915 10987.67 IOPS, 42.92 MiB/s
[2024-11-20T06:26:28.682Z] 11027.80 IOPS, 43.08 MiB/s
[2024-11-20T06:26:28.682Z] 11059.91 IOPS, 43.20 MiB/s
[2024-11-20T06:26:28.682Z] 11072.67 IOPS, 43.25 MiB/s
[2024-11-20T06:26:28.682Z] 11082.62 IOPS, 43.29 MiB/s
[2024-11-20T06:26:28.682Z] 11113.71 IOPS, 43.41 MiB/s
00:25:53.915 Latency(us)
00:25:53.915 [2024-11-20T06:26:28.682Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:53.916 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:53.916 Verification LBA range: start 0x0 length 0x4000
00:25:53.916 NVMe0n1                     :      15.01   11129.13      43.47     479.04     0.00   10999.29     768.00   30583.47
00:25:53.916 [2024-11-20T06:26:28.683Z] ===================================================================================================================
00:25:53.916 [2024-11-20T06:26:28.683Z] Total                       :            11129.13      43.47     479.04     0.00   10999.29     768.00   30583.47
00:25:53.916 Received shutdown signal, test time was about 15.000000 seconds
00:25:53.916
00:25:53.916 Latency(us)
00:25:53.916 [2024-11-20T06:26:28.683Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:25:53.916 [2024-11-20T06:26:28.683Z] ===================================================================================================================
00:25:53.916 [2024-11-20T06:26:28.683Z] Total                       :                0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1407787
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1407787 /var/tmp/bdevperf.sock
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1407787 ']'
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
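The grep -c gate in the trace above (host/failover.sh@65-67) is the pass/fail check for the first run: the captured target log must contain exactly three "Resetting controller successful" records, one per failover hop, before the script starts a fresh bdevperf for the second phase. A minimal sketch of that pattern, assuming the bdevperf output has been captured to a file such as try.txt (the expected count of 3 comes from the trace; the file name and error message are illustrative):

    # Fail the test unless exactly three controller resets succeeded.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count" >&2
        exit 1
    fi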
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:53.916 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:54.178 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:54.178 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:54.178 07:26:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:54.439 [2024-11-20 07:26:29.009803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:54.439 07:26:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:54.439 [2024-11-20 07:26:29.194265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:54.700 07:26:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:54.961 NVMe0n1
00:25:54.961 07:26:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:55.222 00
00:25:55.483 07:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:55.745 00
00:25:55.745 07:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:55.745 07:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:55.745 07:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:56.006 07:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:59.312 07:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:59.312 07:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:59.312 07:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1408802
00:25:59.312 07:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:59.312 07:26:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1408802
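Condensed, the trace above is the whole multipath setup for the second run: the target gains two extra listeners (4421, 4422) on the same subsystem, bdevperf attaches the single NVMe0 controller once per port with -x failover so the extra trids are kept as standby paths, and detaching the active 4420 path is what forces the failover exercised by perform_tests, whose JSON results follow below. A sketch of the same sequence, assuming rpc.py is on PATH and the target already serves nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 (commands, ports, and names come from the trace; the loop is illustrative):

    # Expose two more portals on the target side.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Register all three portals with the bdevperf-side bdev_nvme as failover paths.
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done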
00:26:00.255 "workload": "verify", 00:26:00.255 "status": "finished", 00:26:00.255 "verify_range": { 00:26:00.256 "start": 0, 00:26:00.256 "length": 16384 00:26:00.256 }, 00:26:00.256 "queue_depth": 128, 00:26:00.256 "io_size": 4096, 00:26:00.256 "runtime": 1.003881, 00:26:00.256 "iops": 11255.318110413486, 00:26:00.256 "mibps": 43.96608636880268, 00:26:00.256 "io_failed": 0, 00:26:00.256 "io_timeout": 0, 00:26:00.256 "avg_latency_us": 11319.52746260731, 00:26:00.256 "min_latency_us": 2607.786666666667, 00:26:00.256 "max_latency_us": 15510.186666666666 00:26:00.256 } 00:26:00.256 ], 00:26:00.256 "core_count": 1 00:26:00.256 } 00:26:00.256 07:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:00.256 [2024-11-20 07:26:28.057707] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:26:00.256 [2024-11-20 07:26:28.057767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407787 ] 00:26:00.256 [2024-11-20 07:26:28.135808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.256 [2024-11-20 07:26:28.171492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.256 [2024-11-20 07:26:30.623542] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:00.256 [2024-11-20 07:26:30.623591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.256 [2024-11-20 07:26:30.623603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.256 [2024-11-20 07:26:30.623614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.256 [2024-11-20 07:26:30.623621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.256 [2024-11-20 07:26:30.623630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.256 [2024-11-20 07:26:30.623637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.256 [2024-11-20 07:26:30.623645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.256 [2024-11-20 07:26:30.623652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.256 [2024-11-20 07:26:30.623660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:26:00.256 [2024-11-20 07:26:30.623688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:00.256 [2024-11-20 07:26:30.623703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a23d80 (9): Bad file descriptor 00:26:00.256 [2024-11-20 07:26:30.671998] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:00.256 Running I/O for 1 seconds... 00:26:00.256 11171.00 IOPS, 43.64 MiB/s 00:26:00.256 Latency(us) 00:26:00.256 [2024-11-20T06:26:35.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.256 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:00.256 Verification LBA range: start 0x0 length 0x4000 00:26:00.256 NVMe0n1 : 1.00 11255.32 43.97 0.00 0.00 11319.53 2607.79 15510.19 00:26:00.256 [2024-11-20T06:26:35.023Z] =================================================================================================================== 00:26:00.256 [2024-11-20T06:26:35.023Z] Total : 11255.32 43.97 0.00 0.00 11319.53 2607.79 15510.19 00:26:00.256 07:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:00.256 07:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:00.516 07:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:00.777 07:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:00.777 07:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:00.777 07:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.037 07:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:04.335 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:04.335 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:04.335 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1407787 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1407787 ']' 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1407787 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1407787 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1407787' 00:26:04.336 killing process with pid 1407787 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1407787 00:26:04.336 07:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1407787 00:26:04.336 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:04.336 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.596 rmmod nvme_tcp 00:26:04.596 rmmod nvme_fabrics 00:26:04.596 rmmod nvme_keyring 00:26:04.596 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1404063 ']' 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1404063 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1404063 ']' 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1404063 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.597 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1404063 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1404063' 00:26:04.857 killing process with pid 1404063 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1404063 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1404063 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.857 07:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.400 00:26:07.400 real 0m40.688s 00:26:07.400 user 2m3.752s 00:26:07.400 sys 0m9.017s 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:07.400 ************************************ 00:26:07.400 END TEST nvmf_failover 00:26:07.400 ************************************ 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.400 ************************************ 00:26:07.400 START TEST nvmf_host_discovery 00:26:07.400 ************************************ 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:07.400 * Looking for test storage... 
00:26:07.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:07.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.400 --rc genhtml_branch_coverage=1 00:26:07.400 --rc genhtml_function_coverage=1 00:26:07.400 --rc genhtml_legend=1 00:26:07.400 --rc geninfo_all_blocks=1 00:26:07.400 --rc geninfo_unexecuted_blocks=1 00:26:07.400 00:26:07.400 ' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:07.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.400 --rc genhtml_branch_coverage=1 00:26:07.400 --rc genhtml_function_coverage=1 00:26:07.400 --rc genhtml_legend=1 00:26:07.400 --rc geninfo_all_blocks=1 00:26:07.400 --rc geninfo_unexecuted_blocks=1 00:26:07.400 00:26:07.400 ' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:07.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.400 --rc genhtml_branch_coverage=1 00:26:07.400 --rc genhtml_function_coverage=1 00:26:07.400 --rc genhtml_legend=1 00:26:07.400 --rc geninfo_all_blocks=1 00:26:07.400 --rc geninfo_unexecuted_blocks=1 00:26:07.400 00:26:07.400 ' 00:26:07.400 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:07.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.400 --rc genhtml_branch_coverage=1 00:26:07.400 --rc genhtml_function_coverage=1 00:26:07.400 --rc genhtml_legend=1 00:26:07.400 --rc geninfo_all_blocks=1 00:26:07.400 --rc geninfo_unexecuted_blocks=1 00:26:07.400 00:26:07.400 ' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:07.401 07:26:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.401 07:26:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:15.542 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:15.542 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.542 07:26:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:15.542 Found net devices under 0000:31:00.0: cvl_0_0 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:15.542 Found net devices under 0000:31:00.1: cvl_0_1 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.542 
07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.542 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.543 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:26:15.804 00:26:15.804 --- 10.0.0.2 ping statistics --- 00:26:15.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.804 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:26:15.804 00:26:15.804 --- 10.0.0.1 ping statistics --- 00:26:15.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.804 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1414597 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1414597 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1414597 ']' 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.804 07:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.804 [2024-11-20 07:26:50.495929] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:26:15.804 [2024-11-20 07:26:50.496004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.065 [2024-11-20 07:26:50.603433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.065 [2024-11-20 07:26:50.654903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.065 [2024-11-20 07:26:50.654955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.065 [2024-11-20 07:26:50.654964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.065 [2024-11-20 07:26:50.654971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.065 [2024-11-20 07:26:50.654977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.065 [2024-11-20 07:26:50.655743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.638 [2024-11-20 07:26:51.351289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.638 [2024-11-20 07:26:51.363536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.638 null0 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.638 null1 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.638 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.899 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.899 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1414847 00:26:16.899 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:16.899 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1414847 /tmp/host.sock 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1414847 ']' 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:16.900 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.900 07:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.900 [2024-11-20 07:26:51.460072] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:26:16.900 [2024-11-20 07:26:51.460133] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414847 ] 00:26:16.900 [2024-11-20 07:26:51.542733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.900 [2024-11-20 07:26:51.584627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.843 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.844 [2024-11-20 07:26:52.602645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.844 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:18.106 07:26:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:26:18.106 07:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:18.677 [2024-11-20 07:26:53.280035] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:18.677 [2024-11-20 07:26:53.280054] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:18.677 [2024-11-20 07:26:53.280068] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:18.677 
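Stripped of the polling noise, the target-side half of this test is four RPCs, each already echoed above with its discovery.sh line tag. A condensed replay (rpc_cmd is the suite's rpc.py wrapper: with no -s flag it talks to the nvmf target, with -s /tmp/host.sock to the host-side bdev_nvme application):

    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0           # @86: subsystem exists, host sees nothing
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0     # @90: namespace added, still invisible
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420                                     # @96: listener up, host not yet allowed
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2021-12.io.spdk:test                                       # @103: host allowed, discovery may attach

After each of the first three steps the host-side reads above correctly come back empty; only the add_host call lets the discovery service attach and create nvme0, which is what the bdev_nvme entries below show.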
[2024-11-20 07:26:53.366332] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:18.938 [2024-11-20 07:26:53.589627] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:18.938 [2024-11-20 07:26:53.590566] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fe6670:1 started. 00:26:18.938 [2024-11-20 07:26:53.592165] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:18.938 [2024-11-20 07:26:53.592182] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:18.938 [2024-11-20 07:26:53.598993] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fe6670 was disconnected and freed. delete nvme_qpair. 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.199 07:26:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:19.199 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:19.462 07:26:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 07:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:19.462 [2024-11-20 07:26:54.044326] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fe6850:1 started. 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 [2024-11-20 07:26:54.049555] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fe6850 was disconnected and freed. delete nvme_qpair. 
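The notification assertions (is_notification_count_eq at @99, @108, @114 and later) ride on a cursor kept in notify_id. Reconstructed from the -i arguments and the notification_count=/notify_id= updates visible in the trace, discovery.sh@74-@75 amounts to the sketch below; treat it as an approximation of the in-tree helper, not its verbatim source:

    get_notification_count() {
        # Count only notifications newer than the last cursor, then advance it,
        # so each assertion sees just the events its own RPC produced.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

That matches the progression logged here: the first -i 0 call (@99) sees nothing, the next (@108, above) sees the nvme0n1 registration and moves notify_id to 1, -i 1 (@114, below) sees nvme0n2 and moves it to 2, and the later -i 2 calls come back empty, evidently because listener changes alone register no new bdevs.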
00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 [2024-11-20 07:26:54.150818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:19.462 [2024-11-20 07:26:54.151544] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:19.462 [2024-11-20 07:26:54.151567] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.462 07:26:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.724 [2024-11-20 07:26:54.239848] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:19.724 07:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:19.985 [2024-11-20 07:26:54.509316] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:19.985 [2024-11-20 07:26:54.509359] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:19.985 [2024-11-20 07:26:54.509370] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:19.985 [2024-11-20 07:26:54.509375] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:20.557 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:20.557 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:20.557 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:20.821 07:26:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.821 [2024-11-20 07:26:55.422773] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:20.821 [2024-11-20 07:26:55.422795] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:20.821 [2024-11-20 07:26:55.424133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.821 [2024-11-20 07:26:55.424152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-11-20 07:26:55.424162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.821 [2024-11-20 07:26:55.424170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-11-20 07:26:55.424178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.821 [2024-11-20 07:26:55.424186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-11-20 07:26:55.424194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.821 [2024-11-20 07:26:55.424201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.821 [2024-11-20 07:26:55.424208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.821 [2024-11-20 07:26:55.434144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:20.821 [2024-11-20 07:26:55.444181] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.821 [2024-11-20 07:26:55.444195] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.821 [2024-11-20 07:26:55.444200] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.821 [2024-11-20 07:26:55.444205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.821 [2024-11-20 07:26:55.444224] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.821 [2024-11-20 07:26:55.444514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.821 [2024-11-20 07:26:55.444529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.821 [2024-11-20 07:26:55.444537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.821 [2024-11-20 07:26:55.444549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.821 [2024-11-20 07:26:55.444561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.821 [2024-11-20 07:26:55.444568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.821 [2024-11-20 07:26:55.444576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.821 [2024-11-20 07:26:55.444583] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
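The wall of "connect() failed, errno = 111" entries that begins here is the expected fallout of discovery.sh@127 above: the 4420 listener is gone, so every reconnect attempt the host-side bdev_nvme poller makes against 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED), and the Delete qpairs / Start reconnecting / Resetting controller failed cycle repeats on a roughly 10 ms cadence until the discovery poller prunes the dead path. A hypothetical probe, not something the suite runs, that would confirm the refusal from the same host using bash's built-in /dev/tcp redirection:

    # Illustration only: exits non-zero with "connection refused" while 4420 is down.
    if ! timeout 1 bash -c ': > /dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused, matching the errno = 111 entries in this log"
    fi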
00:26:20.821 [2024-11-20 07:26:55.444588] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.821 [2024-11-20 07:26:55.444593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.821 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.821 [2024-11-20 07:26:55.454255] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.821 [2024-11-20 07:26:55.454267] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.821 [2024-11-20 07:26:55.454271] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.821 [2024-11-20 07:26:55.454276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.821 [2024-11-20 07:26:55.454290] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.821 [2024-11-20 07:26:55.454487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.821 [2024-11-20 07:26:55.454499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.821 [2024-11-20 07:26:55.454506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.821 [2024-11-20 07:26:55.454518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.821 [2024-11-20 07:26:55.454529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.821 [2024-11-20 07:26:55.454540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.821 [2024-11-20 07:26:55.454548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.454554] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.822 [2024-11-20 07:26:55.454559] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.822 [2024-11-20 07:26:55.454563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.822 [2024-11-20 07:26:55.464322] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.822 [2024-11-20 07:26:55.464333] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.822 [2024-11-20 07:26:55.464338] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.464342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.822 [2024-11-20 07:26:55.464356] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:20.822 [2024-11-20 07:26:55.464647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.822 [2024-11-20 07:26:55.464659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.822 [2024-11-20 07:26:55.464666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.822 [2024-11-20 07:26:55.464677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.822 [2024-11-20 07:26:55.464688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.822 [2024-11-20 07:26:55.464694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.822 [2024-11-20 07:26:55.464701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.464707] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.822 [2024-11-20 07:26:55.464712] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.822 [2024-11-20 07:26:55.464716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.822 [2024-11-20 07:26:55.474388] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.822 [2024-11-20 07:26:55.474401] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.822 [2024-11-20 07:26:55.474406] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.474411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.822 [2024-11-20 07:26:55.474426] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.474713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.822 [2024-11-20 07:26:55.474726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.822 [2024-11-20 07:26:55.474733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.822 [2024-11-20 07:26:55.474745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.822 [2024-11-20 07:26:55.474763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.822 [2024-11-20 07:26:55.474769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.822 [2024-11-20 07:26:55.474777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.474783] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:20.822 [2024-11-20 07:26:55.474787] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.822 [2024-11-20 07:26:55.474792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:20.822 [2024-11-20 07:26:55.484457] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.822 [2024-11-20 07:26:55.484468] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.822 [2024-11-20 07:26:55.484473] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.484477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.822 [2024-11-20 07:26:55.484491] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.484678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.822 [2024-11-20 07:26:55.484690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.822 [2024-11-20 07:26:55.484699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.822 [2024-11-20 07:26:55.484711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.822 [2024-11-20 07:26:55.484722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.822 [2024-11-20 07:26:55.484729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.822 [2024-11-20 07:26:55.484736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.484742] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
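All of the waitforcondition blocks in this trace expand the same way: @916 stores the condition string, @917 sets local max=10, @918 decrements it, @919 evals the condition, @920 returns success, and on failure @922 sleeps a second before retrying. A sketch of the common/autotest_common.sh helper as reconstructed from those tags (the in-tree version may differ in detail):

    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            # eval re-runs the RPC-backed getters on every pass
            eval "$cond" && return 0
            sleep 1
        done
        return 1    # condition never held within ~10 attempts
    }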
00:26:20.822 [2024-11-20 07:26:55.484747] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.822 [2024-11-20 07:26:55.484751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.822 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:20.822 [2024-11-20 07:26:55.494521] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.822 [2024-11-20 07:26:55.494537] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.822 [2024-11-20 07:26:55.494542] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.494546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.822 [2024-11-20 07:26:55.494562] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.494846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.822 [2024-11-20 07:26:55.494858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.822 [2024-11-20 07:26:55.494873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.822 [2024-11-20 07:26:55.494884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.822 [2024-11-20 07:26:55.494895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.822 [2024-11-20 07:26:55.494902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.822 [2024-11-20 07:26:55.494910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.494916] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.822 [2024-11-20 07:26:55.494921] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.822 [2024-11-20 07:26:55.494925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.822 [2024-11-20 07:26:55.504594] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.822 [2024-11-20 07:26:55.504606] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
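The getters those conditions call are thin rpc_cmd-plus-jq pipelines against the host socket; their exact commands repeat throughout the trace (@59, @55, @63) and can be read back out directly, with sort and xargs normalizing the output into a single space-separated line for the string compares:

    get_subsystem_names() {    # discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {    # discovery.sh@63, numeric sort of listener ports
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }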
00:26:20.822 [2024-11-20 07:26:55.504611] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.504615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.822 [2024-11-20 07:26:55.504629] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.822 [2024-11-20 07:26:55.505074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.822 [2024-11-20 07:26:55.505113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.822 [2024-11-20 07:26:55.505124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.822 [2024-11-20 07:26:55.505143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.822 [2024-11-20 07:26:55.505171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.822 [2024-11-20 07:26:55.505180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.822 [2024-11-20 07:26:55.505193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.822 [2024-11-20 07:26:55.505200] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.823 [2024-11-20 07:26:55.505206] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.823 [2024-11-20 07:26:55.505211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.823 [2024-11-20 07:26:55.514663] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.823 [2024-11-20 07:26:55.514680] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.823 [2024-11-20 07:26:55.514685] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.514690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.823 [2024-11-20 07:26:55.514708] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:20.823 [2024-11-20 07:26:55.515149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.823 [2024-11-20 07:26:55.515186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.823 [2024-11-20 07:26:55.515197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.823 [2024-11-20 07:26:55.515216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.823 [2024-11-20 07:26:55.515242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.823 [2024-11-20 07:26:55.515250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.823 [2024-11-20 07:26:55.515258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.823 [2024-11-20 07:26:55.515265] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.823 [2024-11-20 07:26:55.515271] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.823 [2024-11-20 07:26:55.515275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.823 [2024-11-20 07:26:55.524741] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.823 [2024-11-20 07:26:55.524756] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.823 [2024-11-20 07:26:55.524761] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.524766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.823 [2024-11-20 07:26:55.524782] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.524977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.823 [2024-11-20 07:26:55.524991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.823 [2024-11-20 07:26:55.524998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.823 [2024-11-20 07:26:55.525010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.823 [2024-11-20 07:26:55.525020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.823 [2024-11-20 07:26:55.525031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.823 [2024-11-20 07:26:55.525039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.823 [2024-11-20 07:26:55.525045] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:20.823 [2024-11-20 07:26:55.525050] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.823 [2024-11-20 07:26:55.525055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:20.823 [2024-11-20 07:26:55.534813] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.823 [2024-11-20 07:26:55.534825] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.823 [2024-11-20 07:26:55.534830] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.534835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.823 [2024-11-20 07:26:55.534848] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.535146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.823 [2024-11-20 07:26:55.535158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.823 [2024-11-20 07:26:55.535165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.823 [2024-11-20 07:26:55.535176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.823 [2024-11-20 07:26:55.535186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.823 [2024-11-20 07:26:55.535193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.823 [2024-11-20 07:26:55.535200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.823 [2024-11-20 07:26:55.535206] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:20.823 [2024-11-20 07:26:55.535210] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.823 [2024-11-20 07:26:55.535215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:20.823 [2024-11-20 07:26:55.544880] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:20.823 [2024-11-20 07:26:55.544892] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:20.823 [2024-11-20 07:26:55.544896] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.544901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:20.823 [2024-11-20 07:26:55.544914] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.823 [2024-11-20 07:26:55.545279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.823 [2024-11-20 07:26:55.545289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6d90 with addr=10.0.0.2, port=4420 00:26:20.823 [2024-11-20 07:26:55.545297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6d90 is same with the state(6) to be set 00:26:20.823 [2024-11-20 07:26:55.545308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6d90 (9): Bad file descriptor 00:26:20.823 [2024-11-20 07:26:55.545318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.823 [2024-11-20 07:26:55.545324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.823 [2024-11-20 07:26:55.545331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:20.823 [2024-11-20 07:26:55.545337] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.823 [2024-11-20 07:26:55.545342] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.823 [2024-11-20 07:26:55.545346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:20.823 [2024-11-20 07:26:55.550361] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:20.823 [2024-11-20 07:26:55.550381] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:20.823 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.085 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:21.085 07:26:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:22.027 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.028 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.289 07:26:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.239 [2024-11-20 07:26:57.881721] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.239 [2024-11-20 07:26:57.881739] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.239 [2024-11-20 07:26:57.881751] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.239 [2024-11-20 07:26:57.971052] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:23.813 [2024-11-20 07:26:58.278602] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:23.813 [2024-11-20 07:26:58.279222] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1fb4570:1 started. 
00:26:23.813 [2024-11-20 07:26:58.281059] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.813 [2024-11-20 07:26:58.281087] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.813 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 request: 00:26:23.814 { 00:26:23.814 "name": "nvme", 00:26:23.814 "trtype": "tcp", 00:26:23.814 "traddr": "10.0.0.2", 00:26:23.814 "adrfam": "ipv4", 00:26:23.814 "trsvcid": "8009", 00:26:23.814 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:23.814 "wait_for_attach": true, 00:26:23.814 "method": "bdev_nvme_start_discovery", 00:26:23.814 "req_id": 1 00:26:23.814 } 00:26:23.814 Got JSON-RPC error response 00:26:23.814 response: 00:26:23.814 { 00:26:23.814 "code": -17, 00:26:23.814 "message": "File exists" 00:26:23.814 } 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.814 [2024-11-20 07:26:58.330521] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1fb4570 was disconnected and freed. delete nvme_qpair. 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 request: 00:26:23.814 { 00:26:23.814 "name": "nvme_second", 00:26:23.814 "trtype": "tcp", 00:26:23.814 "traddr": "10.0.0.2", 00:26:23.814 "adrfam": "ipv4", 00:26:23.814 "trsvcid": "8009", 00:26:23.814 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:23.814 "wait_for_attach": true, 00:26:23.814 "method": 
"bdev_nvme_start_discovery", 00:26:23.814 "req_id": 1 00:26:23.814 } 00:26:23.814 Got JSON-RPC error response 00:26:23.814 response: 00:26:23.814 { 00:26:23.814 "code": -17, 00:26:23.814 "message": "File exists" 00:26:23.814 } 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:23.814 07:26:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.814 07:26:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.195 [2024-11-20 07:26:59.541096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.195 [2024-11-20 07:26:59.541125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fcec00 with addr=10.0.0.2, port=8010 00:26:25.195 [2024-11-20 07:26:59.541138] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:25.195 [2024-11-20 07:26:59.541146] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:25.195 [2024-11-20 07:26:59.541152] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:26.138 [2024-11-20 07:27:00.543569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.139 [2024-11-20 07:27:00.543596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fcec00 with addr=10.0.0.2, port=8010 00:26:26.139 [2024-11-20 07:27:00.543608] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:26.139 [2024-11-20 07:27:00.543615] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:26.139 [2024-11-20 07:27:00.543623] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:27.081 [2024-11-20 07:27:01.545564] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:27.081 request: 00:26:27.081 { 00:26:27.081 "name": "nvme_second", 00:26:27.081 "trtype": "tcp", 00:26:27.081 "traddr": "10.0.0.2", 00:26:27.081 "adrfam": "ipv4", 00:26:27.081 "trsvcid": "8010", 00:26:27.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.081 "wait_for_attach": false, 00:26:27.081 "attach_timeout_ms": 3000, 00:26:27.081 "method": "bdev_nvme_start_discovery", 00:26:27.081 "req_id": 1 00:26:27.081 } 00:26:27.081 Got JSON-RPC error response 00:26:27.081 response: 00:26:27.081 { 00:26:27.081 "code": -110, 00:26:27.081 "message": "Connection timed out" 00:26:27.081 } 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:27.081 07:27:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1414847 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.081 rmmod nvme_tcp 00:26:27.081 rmmod nvme_fabrics 00:26:27.081 rmmod nvme_keyring 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1414597 ']' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1414597 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1414597 ']' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1414597 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1414597 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1414597' 00:26:27.081 killing process with pid 1414597 00:26:27.081 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1414597 
00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1414597 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:27.082 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.344 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.344 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.344 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.344 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.344 07:27:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.258 00:26:29.258 real 0m22.244s 00:26:29.258 user 0m25.723s 00:26:29.258 sys 0m7.930s 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.258 ************************************ 00:26:29.258 END TEST nvmf_host_discovery 00:26:29.258 ************************************ 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.258 07:27:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.258 ************************************ 00:26:29.258 START TEST nvmf_host_multipath_status 00:26:29.258 ************************************ 00:26:29.258 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:29.520 * Looking for test storage... 
00:26:29.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.520 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.521 --rc genhtml_branch_coverage=1 00:26:29.521 --rc genhtml_function_coverage=1 00:26:29.521 --rc genhtml_legend=1 00:26:29.521 --rc geninfo_all_blocks=1 00:26:29.521 --rc geninfo_unexecuted_blocks=1 00:26:29.521 00:26:29.521 ' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.521 --rc genhtml_branch_coverage=1 00:26:29.521 --rc genhtml_function_coverage=1 00:26:29.521 --rc genhtml_legend=1 00:26:29.521 --rc geninfo_all_blocks=1 00:26:29.521 --rc geninfo_unexecuted_blocks=1 00:26:29.521 00:26:29.521 ' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.521 --rc genhtml_branch_coverage=1 00:26:29.521 --rc genhtml_function_coverage=1 00:26:29.521 --rc genhtml_legend=1 00:26:29.521 --rc geninfo_all_blocks=1 00:26:29.521 --rc geninfo_unexecuted_blocks=1 00:26:29.521 00:26:29.521 ' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:29.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.521 --rc genhtml_branch_coverage=1 00:26:29.521 --rc genhtml_function_coverage=1 00:26:29.521 --rc genhtml_legend=1 00:26:29.521 --rc geninfo_all_blocks=1 00:26:29.521 --rc geninfo_unexecuted_blocks=1 00:26:29.521 00:26:29.521 ' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.521 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.522 07:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:39.528 07:27:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:39.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:39.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.528 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:39.529 Found net devices under 0000:31:00.0: cvl_0_0 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:26:39.529 Found net devices under 0000:31:00.1: cvl_0_1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.529 07:27:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:39.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:26:39.529 00:26:39.529 --- 10.0.0.2 ping statistics --- 00:26:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.529 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:26:39.529 00:26:39.529 --- 10.0.0.1 ping statistics --- 00:26:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.529 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1422274 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1422274 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1422274 ']' 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.529 07:27:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.529 07:27:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.529 [2024-11-20 07:27:12.847086] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:26:39.529 [2024-11-20 07:27:12.847133] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.529 [2024-11-20 07:27:12.932605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:39.529 [2024-11-20 07:27:12.967774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.529 [2024-11-20 07:27:12.967810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.529 [2024-11-20 07:27:12.967818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.529 [2024-11-20 07:27:12.967825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.529 [2024-11-20 07:27:12.967831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.529 [2024-11-20 07:27:12.969047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.529 [2024-11-20 07:27:12.969134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1422274 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:39.529 [2024-11-20 07:27:13.823200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.529 07:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:39.529 Malloc0 00:26:39.530 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:39.530 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.789 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.789 [2024-11-20 07:27:14.507307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.789 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:40.049 [2024-11-20 07:27:14.671685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1422637 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1422637 /var/tmp/bdevperf.sock 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1422637 ']' 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:40.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
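For readability, the target-side bring-up that the trace above just completed condenses to the short RPC sequence below. This is a sketch rather than a verbatim excerpt: rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log, and all arguments are copied from the entries above.

    # Condensed target-side setup (sketch; rpc.py abbreviates the full script path)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting on
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same subsystem give the bdevperf host two TCP paths (ports 4420 and 4421) to the same namespace, which is what the multipath status checks below exercise.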
00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:40.049 07:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:40.991 07:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:40.991 07:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:40.991 07:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:40.991 07:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:41.562 Nvme0n1 00:26:41.562 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:41.823 Nvme0n1 00:26:41.823 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:41.823 07:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:44.370 07:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:44.370 07:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:44.370 07:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:44.370 07:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:45.312 07:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:45.312 07:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.312 07:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.312 07:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.573 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.847 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.847 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.847 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.847 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.141 07:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.456 07:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.456 07:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:46.456 07:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
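The port_status checks repeated above and below each boil down to one RPC plus one jq filter against bdevperf's view of the I/O paths. A minimal reconstruction, assuming host/multipath_status.sh defines the helper essentially like this (the jq expression is copied verbatim from the trace; rpc.py again abbreviates the full script path):

    port_status() {
        # Usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
        local port=$1 field=$2 expected=$3 actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }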
00:26:46.456 07:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.717 07:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:47.660 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:47.660 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.660 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.660 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.922 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.922 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.922 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.922 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.183 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.184 07:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.444 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.444 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.444 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
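Each set_ANA_state step is likewise just a pair of listener updates, one per port. A sketch under the assumption that the helper wraps the two rpc.py calls seen in the trace:

    set_ANA_state() {
        # Usage: set_ANA_state <state for 4420> <state for 4421>,
        # where a state is optimized, non_optimized, or inaccessible.
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The sleep 1 that follows each call in the trace gives the host time to pick up the new ANA state before the next round of checks.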
00:26:48.444 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.704 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.704 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.704 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.705 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.965 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.965 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:48.965 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.965 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.223 07:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:50.163 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:50.163 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.163 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.163 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.424 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.424 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.424 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.424 07:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.424 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.424 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.424 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.424 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.685 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.685 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.685 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.685 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.946 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.946 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.946 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.946 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:51.207 07:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.467 07:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:51.728 07:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:52.667 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:52.667 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:52.667 07:27:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.667 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.667 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.667 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.929 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.189 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.189 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.189 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.189 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.451 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.451 07:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:53.451 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.451 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.451 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.451 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:53.451 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.451 07:27:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.711 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.711 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:53.711 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:53.971 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:54.230 07:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:55.171 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:55.171 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:55.171 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.171 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.431 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.431 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:55.431 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.431 07:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.431 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.431 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.431 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.431 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.691 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.691 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.691 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.691 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.952 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.212 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.212 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:56.212 07:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:56.471 07:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:56.471 07:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:57.852 07:27:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.852 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.113 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.113 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.113 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.113 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.374 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.374 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:58.374 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.374 07:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.374 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.374 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:58.374 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.374 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.634 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.634 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:58.893 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:58.893 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:59.152 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:59.152 07:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:00.533 07:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:00.533 07:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:00.533 07:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.533 07:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.533 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.794 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.794 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.794 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.794 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.053 07:27:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.053 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:01.313 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.313 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:01.313 07:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:01.573 07:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:01.833 07:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.772 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.032 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.292 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.292 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.292 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.292 07:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.553 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.812 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.812 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:03.812 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:04.071 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:04.071 07:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
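check_status, as exercised throughout this run, asserts six values in a fixed order: current, connected, and accessible for ports 4420 and 4421. A sketch of the aggregation, assuming it simply chains the port_status helper shown earlier:

    check_status() {
        # Usage: check_status <cur_4420> <cur_4421> <conn_4420> <conn_4421> <acc_4420> <acc_4421>
        port_status 4420 current "$1" &&
        port_status 4421 current "$2" &&
        port_status 4420 connected "$3" &&
        port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }

Note how the expectations shift after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active above: with active_active, both optimized paths report current as true (check_status true true ...), whereas before the policy change only one path at a time was current.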
00:27:05.454 07:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:05.454 07:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:05.454 07:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.454 07:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.454 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.715 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.715 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.715 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.715 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.975 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.975 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.975 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.975 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:06.235 07:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:06.494 07:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:06.754 07:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:07.694 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:07.694 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:07.694 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.694 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.955 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.216 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:08.216 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.216 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.216 07:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.477 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.477 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:08.477 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.477 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1422637 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1422637 ']' 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1422637 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1422637 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1422637' 00:27:08.738 killing process with pid 1422637 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1422637 00:27:08.738 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1422637 00:27:08.738 { 00:27:08.738 "results": [ 00:27:08.738 { 00:27:08.738 "job": "Nvme0n1", 
00:27:08.738 "core_mask": "0x4", 00:27:08.738 "workload": "verify", 00:27:08.738 "status": "terminated", 00:27:08.738 "verify_range": { 00:27:08.738 "start": 0, 00:27:08.738 "length": 16384 00:27:08.738 }, 00:27:08.738 "queue_depth": 128, 00:27:08.738 "io_size": 4096, 00:27:08.738 "runtime": 26.826195, 00:27:08.738 "iops": 10872.880033862424, 00:27:08.738 "mibps": 42.472187632275094, 00:27:08.738 "io_failed": 0, 00:27:08.738 "io_timeout": 0, 00:27:08.738 "avg_latency_us": 11753.671769371247, 00:27:08.738 "min_latency_us": 237.22666666666666, 00:27:08.738 "max_latency_us": 3019898.88 00:27:08.738 } 00:27:08.738 ], 00:27:08.738 "core_count": 1 00:27:08.738 } 00:27:09.001 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1422637 00:27:09.001 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:09.002 [2024-11-20 07:27:14.737236] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:27:09.002 [2024-11-20 07:27:14.737300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422637 ] 00:27:09.002 [2024-11-20 07:27:14.801869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.002 [2024-11-20 07:27:14.830999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.002 Running I/O for 90 seconds... 00:27:09.002 9465.00 IOPS, 36.97 MiB/s [2024-11-20T06:27:43.769Z] 9590.50 IOPS, 37.46 MiB/s [2024-11-20T06:27:43.769Z] 9606.67 IOPS, 37.53 MiB/s [2024-11-20T06:27:43.769Z] 9610.00 IOPS, 37.54 MiB/s [2024-11-20T06:27:43.769Z] 9879.40 IOPS, 38.59 MiB/s [2024-11-20T06:27:43.769Z] 10444.50 IOPS, 40.80 MiB/s [2024-11-20T06:27:43.769Z] 10815.29 IOPS, 42.25 MiB/s [2024-11-20T06:27:43.769Z] 10770.50 IOPS, 42.07 MiB/s [2024-11-20T06:27:43.769Z] 10645.89 IOPS, 41.59 MiB/s [2024-11-20T06:27:43.769Z] 10550.00 IOPS, 41.21 MiB/s [2024-11-20T06:27:43.769Z] 10478.00 IOPS, 40.93 MiB/s [2024-11-20T06:27:43.769Z] [2024-11-20 07:27:28.528462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
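The terminated job summary printed a little earlier is plain JSON, so its derived fields can be re-checked directly: "mibps" is just iops × io_size scaled to MiB. A one-liner, with results.json as a hypothetical capture of that block:

jq -r '.results[0] | .iops * .io_size / (1024 * 1024)' results.json
# 10872.880033862424 * 4096 / 1048576 ≈ 42.4722, matching the reported "mibps"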
00:27:09.002 [2024-11-20 07:27:28.528742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.528988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.002 [2024-11-20 07:27:28.528993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.002 [2024-11-20 07:27:28.529010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.002 [2024-11-20 07:27:28.529026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.002 [2024-11-20 07:27:28.529042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.002 [2024-11-20 07:27:28.529063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.002 [2024-11-20 07:27:28.529080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:09.002 [2024-11-20 07:27:28.529091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.003 [2024-11-20 07:27:28.529097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.003 [2024-11-20 07:27:28.529113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.003 [2024-11-20 07:27:28.529298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.003 [2024-11-20 07:27:28.529315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
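The killprocess/wait sequence traced at 07:27:43, just before the job summary, reduces to a guard-then-kill pattern. The following mirrors what the autotest_common.sh trace shows rather than quoting the function itself:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    if [ "$(uname)" = Linux ]; then
        # refuse to kill a bare sudo wrapper; in this run comm= resolved to reactor_2
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap the child; bdevperf emits its results JSON as it exits
}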
00:27:09.003 [2024-11-20 07:27:28.529439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.529797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.529802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.530011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.530018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.530032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.530039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.530053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.003 [2024-11-20 07:27:28.530058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:09.003 [2024-11-20 07:27:28.530071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
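Every completion replayed in this stretch carries the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, so the interesting signal is only how many commands were in flight per queue when the state flipped. A hypothetical tally over the dumped bdevperf log (the try.txt shown via cat above), not part of the test itself:

awk '/ASYMMETRIC ACCESS INACCESSIBLE/ {
    for (i = 1; i <= NF; i++) if ($i ~ /^qid:/) count[$i]++
} END { for (q in count) print q, count[q] }' try.txt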
00:27:09.004 [2024-11-20 07:27:28.530630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.004 [2024-11-20 07:27:28.530666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.530755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.004 [2024-11-20 07:27:28.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.004 [2024-11-20 07:27:28.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
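The same dump also distinguishes submissions by SGL type: WRITE commands are printed with SGL DATA BLOCK OFFSET and READ commands with SGL TRANSPORT DATA BLOCK, so a rough read/write split of the replayed verify workload can be extracted with another hypothetical helper:

awk '/nvme_io_qpair_print_command/ {
    if (/WRITE sqid/) writes++
    else if (/READ sqid/) reads++
} END { printf "reads=%d writes=%d\n", reads, writes }' try.txt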
00:27:09.005 [2024-11-20 07:27:28.532436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.005 [2024-11-20 07:27:28.532842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.005 [2024-11-20 07:27:28.532972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:09.005 [2024-11-20 07:27:28.532988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:28.532994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:28.533010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:28.533015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:09.006 10328.92 IOPS, 40.35 MiB/s [2024-11-20T06:27:43.773Z] 9534.38 IOPS, 37.24 MiB/s [2024-11-20T06:27:43.773Z] 8853.36 IOPS, 34.58 MiB/s [2024-11-20T06:27:43.773Z] 8356.60 IOPS, 32.64 MiB/s [2024-11-20T06:27:43.773Z] 8654.69 IOPS, 33.81 MiB/s [2024-11-20T06:27:43.773Z] 8922.18 IOPS, 34.85 MiB/s [2024-11-20T06:27:43.773Z] 9357.78 IOPS, 36.55 MiB/s [2024-11-20T06:27:43.773Z] 9758.95 IOPS, 38.12 MiB/s [2024-11-20T06:27:43.773Z] 10029.70 IOPS, 39.18 MiB/s [2024-11-20T06:27:43.773Z] 10171.76 IOPS, 39.73 MiB/s [2024-11-20T06:27:43.773Z] 10301.50 IOPS, 40.24 MiB/s [2024-11-20T06:27:43.773Z] 10561.17 IOPS, 41.25 MiB/s [2024-11-20T06:27:43.773Z] 10828.88 IOPS, 42.30 MiB/s [2024-11-20T06:27:43.773Z] [2024-11-20 07:27:41.295765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.295802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 
07:27:41.295842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.295986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.295996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67800 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.006 [2024-11-20 07:27:41.296127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.006 [2024-11-20 07:27:41.296255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:09.006 [2024-11-20 07:27:41.296271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.007 [2024-11-20 07:27:41.296718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.007 [2024-11-20 07:27:41.296728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.007 [2024-11-20 07:27:41.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 
m:0 dnr:0
00:27:09.007 [2024-11-20 07:27:41.296744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.007 [2024-11-20 07:27:41.296749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:09.007 [2024-11-20 07:27:41.296759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.007 [2024-11-20 07:27:41.296764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:09.007 [2024-11-20 07:27:41.296775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.007 [2024-11-20 07:27:41.296780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:09.007 10966.16 IOPS, 42.84 MiB/s [2024-11-20T06:27:43.774Z]
10913.31 IOPS, 42.63 MiB/s [2024-11-20T06:27:43.774Z]
Received shutdown signal, test time was about 26.826802 seconds
00:27:09.007
00:27:09.007 Latency(us)
00:27:09.007 [2024-11-20T06:27:43.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.007 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:09.007 Verification LBA range: start 0x0 length 0x4000
00:27:09.007 Nvme0n1 : 26.83 10872.88 42.47 0.00 0.00 11753.67 237.23 3019898.88
00:27:09.007 [2024-11-20T06:27:43.774Z] ===================================================================================================================
00:27:09.007 [2024-11-20T06:27:43.774Z] Total : 10872.88 42.47 0.00 0.00 11753.67 237.23 3019898.88
00:27:09.007 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:09.268 rmmod nvme_tcp
00:27:09.268 rmmod nvme_fabrics
00:27:09.268 rmmod nvme_keyring
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:09.268
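The nvmfcleanup trace above reduces to a guarded unload of the kernel NVMe-oF modules. A minimal sketch of that sequence, assuming nvme-tcp and nvme-fabrics were loaded by the harness; the {1..20} retry mirrors the trace, while the sleep between attempts is an assumed simplification of what nvmf/common.sh does:

#!/usr/bin/env bash
# Sketch of the module teardown traced above (simplified; see assumptions in the lead-in).
sync                          # flush dirty pages before removing modules
set +e                        # a still-busy module should be retried, not fatal
for i in {1..20}; do
    # -r also drops now-unused dependencies, which is why the kernel logs
    # rmmod for nvme_tcp, nvme_fabrics and nvme_keyring in one pass above
    modprobe -v -r nvme-tcp && break
    sleep 1                   # assumed back-off: an in-flight disconnect may hold a reference
done
modprobe -v -r nvme-fabrics   # no-op if the dependency chain already unloaded it
set -e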
07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1422274 ']' 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1422274 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1422274 ']' 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1422274 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1422274 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1422274' 00:27:09.268 killing process with pid 1422274 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1422274 00:27:09.268 07:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1422274 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.528 07:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.441 00:27:11.441 real 0m42.118s 00:27:11.441 user 1m46.586s 00:27:11.441 sys 0m12.210s 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:11.441 
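Everything the harness adds to the firewall is tagged, which is what makes the iptr teardown traced above a one-liner: save the ruleset, filter out the tagged rules, and load the result back. The same pattern, using only the standard tools the trace shows (no SPDK helper required):

# Drop exactly the firewall rules the test suite tagged with an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore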
************************************ 00:27:11.441 END TEST nvmf_host_multipath_status 00:27:11.441 ************************************ 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.441 ************************************ 00:27:11.441 START TEST nvmf_discovery_remove_ifc 00:27:11.441 ************************************ 00:27:11.441 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:11.703 * Looking for test storage... 00:27:11.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.703 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:11.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.704 --rc genhtml_branch_coverage=1 00:27:11.704 --rc genhtml_function_coverage=1 00:27:11.704 --rc genhtml_legend=1 00:27:11.704 --rc geninfo_all_blocks=1 00:27:11.704 --rc geninfo_unexecuted_blocks=1 00:27:11.704 00:27:11.704 ' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:11.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.704 --rc genhtml_branch_coverage=1 00:27:11.704 --rc genhtml_function_coverage=1 00:27:11.704 --rc genhtml_legend=1 00:27:11.704 --rc geninfo_all_blocks=1 00:27:11.704 --rc geninfo_unexecuted_blocks=1 00:27:11.704 00:27:11.704 ' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:11.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.704 --rc genhtml_branch_coverage=1 00:27:11.704 --rc genhtml_function_coverage=1 00:27:11.704 --rc genhtml_legend=1 00:27:11.704 --rc geninfo_all_blocks=1 00:27:11.704 --rc geninfo_unexecuted_blocks=1 00:27:11.704 00:27:11.704 ' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:11.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.704 --rc genhtml_branch_coverage=1 00:27:11.704 --rc genhtml_function_coverage=1 00:27:11.704 --rc genhtml_legend=1 00:27:11.704 --rc geninfo_all_blocks=1 00:27:11.704 --rc geninfo_unexecuted_blocks=1 00:27:11.704 00:27:11.704 ' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.704 
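The scripts/common.sh trace above walks a field-wise version comparison: both version strings are split into numeric fields and compared left to right. A condensed sketch under that reading of the trace; the in-tree cmp_versions is more general (it also handles the other comparison operators), so treat this as illustrative:

# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local i
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        # missing fields compare as 0, so "1.15" vs "2" behaves like "1.15" vs "2.0"
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo 'lcov 1.15 predates 2.x'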
07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.704 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.705 07:27:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:19.864 07:27:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:19.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.864 07:27:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:19.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:19.864 Found net devices under 0000:31:00.0: cvl_0_0 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:19.864 Found net devices under 0000:31:00.1: cvl_0_1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.864 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:20.125 
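Condensed, the nvmf_tcp_init sequence above builds the test topology: one port of the two-port E810 NIC (cvl_0_0) becomes the target side and is isolated in its own network namespace, while its peer (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic actually crosses the link instead of short-circuiting through the local stack. A sketch using the names from the trace; the real helper also flushes stale addresses first and tags its iptables rule for later cleanup:

# Target NIC port lives in a namespace; the initiator port stays in the root namespace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Let the initiator reach the target's NVMe/TCP listener on port 4420.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow are the smoke test for this wiring: root namespace out to 10.0.0.2, and from inside the namespace back to 10.0.0.1.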
07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:20.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:20.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms
00:27:20.125
00:27:20.125 --- 10.0.0.2 ping statistics ---
00:27:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.125 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:20.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:20.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:27:20.125
00:27:20.125 --- 10.0.0.1 ping statistics ---
00:27:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:20.125 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1433204
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1433204
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:20.125 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1433204 ']'
00:27:20.126 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:20.126 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:20.126 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.126 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:20.126 07:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.126 [2024-11-20 07:27:54.837514] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:27:20.126 [2024-11-20 07:27:54.837572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.386 [2024-11-20 07:27:54.941796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.386 [2024-11-20 07:27:54.991870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.386 [2024-11-20 07:27:54.991919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.386 [2024-11-20 07:27:54.991928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.386 [2024-11-20 07:27:54.991935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.387 [2024-11-20 07:27:54.991941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.387 [2024-11-20 07:27:54.992706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.958 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.958 [2024-11-20 07:27:55.698675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.958 [2024-11-20 07:27:55.706936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:20.958 null0 00:27:21.221 [2024-11-20 07:27:55.738887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1433272 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1433272 /tmp/host.sock 00:27:21.221 07:27:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1433272 ']' 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:21.221 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:21.221 07:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.221 [2024-11-20 07:27:55.817418] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:27:21.221 [2024-11-20 07:27:55.817482] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433272 ] 00:27:21.221 [2024-11-20 07:27:55.900033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.221 [2024-11-20 07:27:55.941341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:22.164 07:27:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.164 07:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.106 [2024-11-20 07:27:57.757899] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:23.106 [2024-11-20 07:27:57.757921] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:23.106 [2024-11-20 07:27:57.757934] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:23.367 [2024-11-20 07:27:57.886372] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:23.367 [2024-11-20 07:27:58.108590] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:23.367 [2024-11-20 07:27:58.109631] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf1d690:1 started. 00:27:23.367 [2024-11-20 07:27:58.111246] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:23.367 [2024-11-20 07:27:58.111292] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:23.367 [2024-11-20 07:27:58.111312] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:23.367 [2024-11-20 07:27:58.111325] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:23.367 [2024-11-20 07:27:58.111345] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.367 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.627 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.627 [2024-11-20 07:27:58.157630] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf1d690 was disconnected and freed. delete nvme_qpair. 
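The bdev checks that follow all lean on one helper: ask the host application, over its private RPC socket /tmp/host.sock, which bdevs it currently sees, and poll until the list matches the expected name. A minimal sketch of that loop, assuming rpc.py from the SPDK scripts directory is on PATH (the trace invokes it by its full in-tree path):

# One-line, sorted list of bdev names as seen by the host app.
get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once a second until the bdev list equals the expected value
# ("nvme0n1" after attach, "" once the interface teardown removes it).
wait_for_bdev() {
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1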
00:27:23.627 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.628 07:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.008 07:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.949 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.949 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.950 07:28:00 
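The @75/@76 steps traced above are the fault injection itself: the target address is deleted and the interface downed inside the target namespace, after which the host side polls for nvme0n1 to drain out of the bdev list (commands as traced):

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''   # expect bdev_get_bdevs to eventually return nothing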
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.950 07:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.891 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.892 07:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.833 07:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.217 [2024-11-20 07:28:03.551892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:29.217 [2024-11-20 07:28:03.551938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.217 [2024-11-20 07:28:03.551951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.217 [2024-11-20 07:28:03.551961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.217 [2024-11-20 07:28:03.551969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.217 [2024-11-20 07:28:03.551977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.217 [2024-11-20 07:28:03.551984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.217 [2024-11-20 07:28:03.551993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.217 [2024-11-20 07:28:03.552000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.217 [2024-11-20 07:28:03.552008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.217 [2024-11-20 07:28:03.552016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.217 [2024-11-20 07:28:03.552023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefa050 is same with the state(6) to be set 00:27:29.217 [2024-11-20 07:28:03.561906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefa050 (9): Bad file descriptor 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.217 [2024-11-20 07:28:03.571946] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.217 [2024-11-20 07:28:03.571962] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.217 [2024-11-20 07:28:03.571968] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.217 [2024-11-20 07:28:03.571979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.217 [2024-11-20 07:28:03.572000] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.217 07:28:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.156 [2024-11-20 07:28:04.635894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:30.156 [2024-11-20 07:28:04.635932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa050 with addr=10.0.0.2, port=4420 00:27:30.156 [2024-11-20 07:28:04.635944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefa050 is same with the state(6) to be set 00:27:30.156 [2024-11-20 07:28:04.635964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefa050 (9): Bad file descriptor 00:27:30.156 [2024-11-20 07:28:04.636331] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:30.156 [2024-11-20 07:28:04.636354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:30.156 [2024-11-20 07:28:04.636362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:30.156 [2024-11-20 07:28:04.636370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:30.156 [2024-11-20 07:28:04.636378] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:30.157 [2024-11-20 07:28:04.636384] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:30.157 [2024-11-20 07:28:04.636389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:30.157 [2024-11-20 07:28:04.636396] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:30.157 [2024-11-20 07:28:04.636401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:30.157 07:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.157 07:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.157 07:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.098 [2024-11-20 07:28:05.638771] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:31.098 [2024-11-20 07:28:05.638791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:31.098 [2024-11-20 07:28:05.638802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:31.098 [2024-11-20 07:28:05.638810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:31.098 [2024-11-20 07:28:05.638818] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:31.098 [2024-11-20 07:28:05.638825] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:31.098 [2024-11-20 07:28:05.638830] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:31.098 [2024-11-20 07:28:05.638839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:31.098 [2024-11-20 07:28:05.638860] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:31.098 [2024-11-20 07:28:05.638885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.098 [2024-11-20 07:28:05.638895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.098 [2024-11-20 07:28:05.638905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.098 [2024-11-20 07:28:05.638913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.098 [2024-11-20 07:28:05.638921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.098 [2024-11-20 07:28:05.638928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.098 [2024-11-20 07:28:05.638936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.098 [2024-11-20 07:28:05.638944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.098 [2024-11-20 07:28:05.638952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.098 [2024-11-20 07:28:05.638959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.098 [2024-11-20 07:28:05.638966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
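This failure cascade is driven by the timers passed to bdev_nvme_start_discovery at @69: reconnects are retried every --reconnect-delay-sec 1 until --ctrlr-loss-timeout-sec 2 expires, at which point pending resets are cleared and the discovery entry is removed. One way to watch the same state machine from outside, if reproducing this by hand (the shape of the output varies across SPDK versions, so treat any filtering of it as an assumption):

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .   # inspect per-controller state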
00:27:31.098 [2024-11-20 07:28:05.639330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee9380 (9): Bad file descriptor 00:27:31.098 [2024-11-20 07:28:05.640342] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:31.098 [2024-11-20 07:28:05.640354] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.098 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.099 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.099 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.358 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:31.358 07:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.298 07:28:06 
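The @82/@83 commands in the trace restore the path, and the test then waits for the namespace to come back under a new bdev name, since the old controller was fully torn down:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # rediscovery attaches a fresh controller, hence nvme1n1 not nvme0n1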
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:32.298 07:28:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.290 [2024-11-20 07:28:07.693069] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:33.290 [2024-11-20 07:28:07.693089] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:33.290 [2024-11-20 07:28:07.693103] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:33.290 [2024-11-20 07:28:07.779360] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.290 [2024-11-20 07:28:07.960508] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:33.290 [2024-11-20 07:28:07.961397] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xf04820:1 started. 
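The cleanup trap armed at @62 near the start of this test (and disarmed at @88 below once the final check passes) is the harness's standard idiom, copied from the trace with only line wrapping added:

    trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' \
        SIGINT SIGTERM EXIT
    # ... assertions run here; any signal or early exit triggers the cleanup ...
    trap - SIGINT SIGTERM EXIT   # disarm on the success path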
00:27:33.290 [2024-11-20 07:28:07.962631] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:33.290 [2024-11-20 07:28:07.962666] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:33.290 [2024-11-20 07:28:07.962686] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:33.290 [2024-11-20 07:28:07.962702] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:33.290 [2024-11-20 07:28:07.962710] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:33.290 [2024-11-20 07:28:07.971244] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xf04820 was disconnected and freed. delete nvme_qpair. 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:33.290 07:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.289 07:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1433272 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1433272 ']' 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1433272 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:34.289 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1433272 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1433272' 00:27:34.547 killing process with pid 1433272 
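killprocess at @90, as traced, is roughly the following. This is a reconstruction, not the literal common/autotest_common.sh body; the uname branch and the refusal to signal a sudo wrapper are both visible in the @956-@976 steps:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing left to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1            # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

nvmftestfini (traced just below) then unloads nvme-tcp, nvme-fabrics and nvme-keyring, and strips only the firewall rules the harness tagged, via iptables-save | grep -v SPDK_NVMF | iptables-restore.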
00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1433272 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1433272 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.547 rmmod nvme_tcp 00:27:34.547 rmmod nvme_fabrics 00:27:34.547 rmmod nvme_keyring 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1433204 ']' 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1433204 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1433204 ']' 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1433204 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:34.547 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1433204 00:27:34.806 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:34.806 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:34.806 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1433204' 00:27:34.806 killing process with pid 1433204 00:27:34.806 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1433204 00:27:34.806 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1433204 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:34.807 07:28:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.807 07:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.343 00:27:37.343 real 0m25.324s 00:27:37.343 user 0m29.686s 00:27:37.343 sys 0m7.853s 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.343 ************************************ 00:27:37.343 END TEST nvmf_discovery_remove_ifc 00:27:37.343 ************************************ 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.343 ************************************ 00:27:37.343 START TEST nvmf_identify_kernel_target 00:27:37.343 ************************************ 00:27:37.343 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:37.343 * Looking for test storage... 
00:27:37.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.344 --rc genhtml_branch_coverage=1 00:27:37.344 --rc genhtml_function_coverage=1 00:27:37.344 --rc genhtml_legend=1 00:27:37.344 --rc geninfo_all_blocks=1 00:27:37.344 --rc geninfo_unexecuted_blocks=1 00:27:37.344 00:27:37.344 ' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.344 --rc genhtml_branch_coverage=1 00:27:37.344 --rc genhtml_function_coverage=1 00:27:37.344 --rc genhtml_legend=1 00:27:37.344 --rc geninfo_all_blocks=1 00:27:37.344 --rc geninfo_unexecuted_blocks=1 00:27:37.344 00:27:37.344 ' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.344 --rc genhtml_branch_coverage=1 00:27:37.344 --rc genhtml_function_coverage=1 00:27:37.344 --rc genhtml_legend=1 00:27:37.344 --rc geninfo_all_blocks=1 00:27:37.344 --rc geninfo_unexecuted_blocks=1 00:27:37.344 00:27:37.344 ' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:37.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.344 --rc genhtml_branch_coverage=1 00:27:37.344 --rc genhtml_function_coverage=1 00:27:37.344 --rc genhtml_legend=1 00:27:37.344 --rc geninfo_all_blocks=1 00:27:37.344 --rc geninfo_unexecuted_blocks=1 00:27:37.344 00:27:37.344 ' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.344 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:37.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.345 07:28:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.478 07:28:19 
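The "[: : integer expression expected" complaint above is a latent shell bug rather than a test failure: nvmf/common.sh line 33 evaluates [ '' -eq 1 ] when its flag variable is unset, which prints the warning and simply tests false. A defensive form would guard emptiness first (the variable name below is hypothetical; the actual one is not visible in the trace):

    # hypothetical guard; substitute the flag actually tested at common.sh:33
    if [ -n "${SOME_NVMF_FLAG:-}" ] && [ "$SOME_NVMF_FLAG" -eq 1 ]; then
        :   # branch body elided
    fi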
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:45.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:45.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.478 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:45.479 Found net devices under 0000:31:00.0: cvl_0_0 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:45.479 Found net devices under 0000:31:00.1: cvl_0_1 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.479 07:28:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.479 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:27:45.739 00:27:45.739 --- 10.0.0.2 ping statistics --- 00:27:45.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.739 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:27:45.739 00:27:45.739 --- 10.0.0.1 ping statistics --- 00:27:45.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.739 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.739 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.740 07:28:20 
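The get_main_ns_ip helper traced above resolves which address the kernel-target test should use: an associative array maps each transport to the *name* of the variable holding its IP, and bash indirect expansion dereferences it. A compact, standalone sketch of that idiom (variable names as in the trace):

    #!/usr/bin/env bash
    # Map transport -> name of the variable that holds the usable IP.
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1

    var=${ip_candidates[$TEST_TRANSPORT]}
    # ${!var} is indirect expansion: the value of the variable whose name is in $var.
    [[ -n ${!var} ]] && echo "${!var}"   # prints 10.0.0.1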
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:45.740 07:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:49.036 Waiting for block devices as requested 00:27:49.296 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:49.296 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:49.296 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:49.556 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:49.556 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:49.556 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:49.556 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:49.817 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:49.817 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:50.077 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:50.077 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:50.077 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:50.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:50.337 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:50.337 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:50.337 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:50.596 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
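configure_kernel_target, whose configfs paths were just assigned above, builds the kernel nvmet target entirely through /sys/kernel/config/nvmet. The bare `echo` lines in the xtrace do not show their redirection targets, so the attribute file names below are the standard nvmet ones rather than anything visible in the log; a condensed sketch, assuming a free, non-zoned /dev/nvme0n1:

    #!/usr/bin/env bash
    set -e
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/$nqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    # The trace loads nvmet explicitly; nvmet-tcp is pulled in automatically
    # when the port's trtype is written (loading it up front also works).
    modprobe nvmet

    # Skip zoned namespaces, as the is_block_zoned check in the trace does.
    [[ $(cat /sys/block/nvme0n1/queue/zoned 2>/dev/null) == none ]]

    mkdir "$subsys"
    mkdir "$ns"
    mkdir "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"          # model string seen in the identify output
    echo 1            > "$subsys/attr_allow_any_host" # no host allow-list
    echo /dev/nvme0n1 > "$ns/device_path"             # back namespace 1 with the drive
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    # Exposing the subsystem on the port is a symlink, exactly as in the trace.
    ln -s "$subsys" "$port/subsystems/"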
00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:50.856 No valid GPT data, bailing 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:50.856 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:50.856 00:27:50.856 Discovery Log Number of Records 2, Generation counter 2 00:27:50.856 =====Discovery Log Entry 0====== 00:27:50.856 trtype: tcp 00:27:50.856 adrfam: ipv4 00:27:50.856 subtype: current discovery subsystem 00:27:50.856 treq: not specified, sq flow control disable supported 00:27:50.856 portid: 1 00:27:50.856 trsvcid: 4420 00:27:50.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:50.856 traddr: 10.0.0.1 00:27:50.856 eflags: none 00:27:50.856 sectype: none 00:27:50.856 =====Discovery Log Entry 1====== 00:27:50.856 trtype: tcp 00:27:50.856 adrfam: ipv4 00:27:50.856 subtype: nvme subsystem 00:27:50.856 treq: not specified, sq flow control disable 
supported 00:27:50.856 portid: 1 00:27:50.856 trsvcid: 4420 00:27:50.856 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:50.856 traddr: 10.0.0.1 00:27:50.856 eflags: none 00:27:50.856 sectype: none 00:27:51.117 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:51.117 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:51.117 ===================================================== 00:27:51.117 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:51.117 ===================================================== 00:27:51.117 Controller Capabilities/Features 00:27:51.117 ================================ 00:27:51.117 Vendor ID: 0000 00:27:51.117 Subsystem Vendor ID: 0000 00:27:51.117 Serial Number: 5a28300144a7f0226cbb 00:27:51.117 Model Number: Linux 00:27:51.117 Firmware Version: 6.8.9-20 00:27:51.117 Recommended Arb Burst: 0 00:27:51.117 IEEE OUI Identifier: 00 00 00 00:27:51.117 Multi-path I/O 00:27:51.117 May have multiple subsystem ports: No 00:27:51.117 May have multiple controllers: No 00:27:51.117 Associated with SR-IOV VF: No 00:27:51.117 Max Data Transfer Size: Unlimited 00:27:51.117 Max Number of Namespaces: 0 00:27:51.117 Max Number of I/O Queues: 1024 00:27:51.117 NVMe Specification Version (VS): 1.3 00:27:51.117 NVMe Specification Version (Identify): 1.3 00:27:51.117 Maximum Queue Entries: 1024 00:27:51.117 Contiguous Queues Required: No 00:27:51.117 Arbitration Mechanisms Supported 00:27:51.117 Weighted Round Robin: Not Supported 00:27:51.117 Vendor Specific: Not Supported 00:27:51.117 Reset Timeout: 7500 ms 00:27:51.117 Doorbell Stride: 4 bytes 00:27:51.117 NVM Subsystem Reset: Not Supported 00:27:51.117 Command Sets Supported 00:27:51.117 NVM Command Set: Supported 00:27:51.117 Boot Partition: Not Supported 00:27:51.117 Memory Page Size Minimum: 4096 bytes 00:27:51.117 Memory Page Size Maximum: 4096 bytes 00:27:51.117 Persistent Memory Region: Not Supported 00:27:51.117 Optional Asynchronous Events Supported 00:27:51.117 Namespace Attribute Notices: Not Supported 00:27:51.117 Firmware Activation Notices: Not Supported 00:27:51.117 ANA Change Notices: Not Supported 00:27:51.117 PLE Aggregate Log Change Notices: Not Supported 00:27:51.117 LBA Status Info Alert Notices: Not Supported 00:27:51.117 EGE Aggregate Log Change Notices: Not Supported 00:27:51.117 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.117 Zone Descriptor Change Notices: Not Supported 00:27:51.117 Discovery Log Change Notices: Supported 00:27:51.117 Controller Attributes 00:27:51.117 128-bit Host Identifier: Not Supported 00:27:51.117 Non-Operational Permissive Mode: Not Supported 00:27:51.117 NVM Sets: Not Supported 00:27:51.117 Read Recovery Levels: Not Supported 00:27:51.117 Endurance Groups: Not Supported 00:27:51.117 Predictable Latency Mode: Not Supported 00:27:51.117 Traffic Based Keep ALive: Not Supported 00:27:51.117 Namespace Granularity: Not Supported 00:27:51.117 SQ Associations: Not Supported 00:27:51.117 UUID List: Not Supported 00:27:51.117 Multi-Domain Subsystem: Not Supported 00:27:51.117 Fixed Capacity Management: Not Supported 00:27:51.117 Variable Capacity Management: Not Supported 00:27:51.117 Delete Endurance Group: Not Supported 00:27:51.117 Delete NVM Set: Not Supported 00:27:51.117 Extended LBA Formats Supported: Not Supported 00:27:51.117 Flexible Data Placement 
Supported: Not Supported 00:27:51.117 00:27:51.117 Controller Memory Buffer Support 00:27:51.117 ================================ 00:27:51.117 Supported: No 00:27:51.117 00:27:51.117 Persistent Memory Region Support 00:27:51.117 ================================ 00:27:51.117 Supported: No 00:27:51.117 00:27:51.117 Admin Command Set Attributes 00:27:51.117 ============================ 00:27:51.117 Security Send/Receive: Not Supported 00:27:51.117 Format NVM: Not Supported 00:27:51.117 Firmware Activate/Download: Not Supported 00:27:51.117 Namespace Management: Not Supported 00:27:51.117 Device Self-Test: Not Supported 00:27:51.117 Directives: Not Supported 00:27:51.117 NVMe-MI: Not Supported 00:27:51.117 Virtualization Management: Not Supported 00:27:51.117 Doorbell Buffer Config: Not Supported 00:27:51.117 Get LBA Status Capability: Not Supported 00:27:51.117 Command & Feature Lockdown Capability: Not Supported 00:27:51.117 Abort Command Limit: 1 00:27:51.117 Async Event Request Limit: 1 00:27:51.117 Number of Firmware Slots: N/A 00:27:51.117 Firmware Slot 1 Read-Only: N/A 00:27:51.117 Firmware Activation Without Reset: N/A 00:27:51.117 Multiple Update Detection Support: N/A 00:27:51.117 Firmware Update Granularity: No Information Provided 00:27:51.117 Per-Namespace SMART Log: No 00:27:51.117 Asymmetric Namespace Access Log Page: Not Supported 00:27:51.117 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:51.117 Command Effects Log Page: Not Supported 00:27:51.117 Get Log Page Extended Data: Supported 00:27:51.117 Telemetry Log Pages: Not Supported 00:27:51.117 Persistent Event Log Pages: Not Supported 00:27:51.117 Supported Log Pages Log Page: May Support 00:27:51.117 Commands Supported & Effects Log Page: Not Supported 00:27:51.117 Feature Identifiers & Effects Log Page:May Support 00:27:51.118 NVMe-MI Commands & Effects Log Page: May Support 00:27:51.118 Data Area 4 for Telemetry Log: Not Supported 00:27:51.118 Error Log Page Entries Supported: 1 00:27:51.118 Keep Alive: Not Supported 00:27:51.118 00:27:51.118 NVM Command Set Attributes 00:27:51.118 ========================== 00:27:51.118 Submission Queue Entry Size 00:27:51.118 Max: 1 00:27:51.118 Min: 1 00:27:51.118 Completion Queue Entry Size 00:27:51.118 Max: 1 00:27:51.118 Min: 1 00:27:51.118 Number of Namespaces: 0 00:27:51.118 Compare Command: Not Supported 00:27:51.118 Write Uncorrectable Command: Not Supported 00:27:51.118 Dataset Management Command: Not Supported 00:27:51.118 Write Zeroes Command: Not Supported 00:27:51.118 Set Features Save Field: Not Supported 00:27:51.118 Reservations: Not Supported 00:27:51.118 Timestamp: Not Supported 00:27:51.118 Copy: Not Supported 00:27:51.118 Volatile Write Cache: Not Present 00:27:51.118 Atomic Write Unit (Normal): 1 00:27:51.118 Atomic Write Unit (PFail): 1 00:27:51.118 Atomic Compare & Write Unit: 1 00:27:51.118 Fused Compare & Write: Not Supported 00:27:51.118 Scatter-Gather List 00:27:51.118 SGL Command Set: Supported 00:27:51.118 SGL Keyed: Not Supported 00:27:51.118 SGL Bit Bucket Descriptor: Not Supported 00:27:51.118 SGL Metadata Pointer: Not Supported 00:27:51.118 Oversized SGL: Not Supported 00:27:51.118 SGL Metadata Address: Not Supported 00:27:51.118 SGL Offset: Supported 00:27:51.118 Transport SGL Data Block: Not Supported 00:27:51.118 Replay Protected Memory Block: Not Supported 00:27:51.118 00:27:51.118 Firmware Slot Information 00:27:51.118 ========================= 00:27:51.118 Active slot: 0 00:27:51.118 00:27:51.118 00:27:51.118 Error Log 00:27:51.118 
========= 00:27:51.118 00:27:51.118 Active Namespaces 00:27:51.118 ================= 00:27:51.118 Discovery Log Page 00:27:51.118 ================== 00:27:51.118 Generation Counter: 2 00:27:51.118 Number of Records: 2 00:27:51.118 Record Format: 0 00:27:51.118 00:27:51.118 Discovery Log Entry 0 00:27:51.118 ---------------------- 00:27:51.118 Transport Type: 3 (TCP) 00:27:51.118 Address Family: 1 (IPv4) 00:27:51.118 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:51.118 Entry Flags: 00:27:51.118 Duplicate Returned Information: 0 00:27:51.118 Explicit Persistent Connection Support for Discovery: 0 00:27:51.118 Transport Requirements: 00:27:51.118 Secure Channel: Not Specified 00:27:51.118 Port ID: 1 (0x0001) 00:27:51.118 Controller ID: 65535 (0xffff) 00:27:51.118 Admin Max SQ Size: 32 00:27:51.118 Transport Service Identifier: 4420 00:27:51.118 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:51.118 Transport Address: 10.0.0.1 00:27:51.118 Discovery Log Entry 1 00:27:51.118 ---------------------- 00:27:51.118 Transport Type: 3 (TCP) 00:27:51.118 Address Family: 1 (IPv4) 00:27:51.118 Subsystem Type: 2 (NVM Subsystem) 00:27:51.118 Entry Flags: 00:27:51.118 Duplicate Returned Information: 0 00:27:51.118 Explicit Persistent Connection Support for Discovery: 0 00:27:51.118 Transport Requirements: 00:27:51.118 Secure Channel: Not Specified 00:27:51.118 Port ID: 1 (0x0001) 00:27:51.118 Controller ID: 65535 (0xffff) 00:27:51.118 Admin Max SQ Size: 32 00:27:51.118 Transport Service Identifier: 4420 00:27:51.118 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:51.118 Transport Address: 10.0.0.1 00:27:51.118 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:51.118 get_feature(0x01) failed 00:27:51.118 get_feature(0x02) failed 00:27:51.118 get_feature(0x04) failed 00:27:51.118 ===================================================== 00:27:51.118 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:51.118 ===================================================== 00:27:51.118 Controller Capabilities/Features 00:27:51.118 ================================ 00:27:51.118 Vendor ID: 0000 00:27:51.118 Subsystem Vendor ID: 0000 00:27:51.118 Serial Number: 54a13e03d0905c505a1d 00:27:51.118 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.118 Firmware Version: 6.8.9-20 00:27:51.118 Recommended Arb Burst: 6 00:27:51.118 IEEE OUI Identifier: 00 00 00 00:27:51.118 Multi-path I/O 00:27:51.118 May have multiple subsystem ports: Yes 00:27:51.118 May have multiple controllers: Yes 00:27:51.118 Associated with SR-IOV VF: No 00:27:51.118 Max Data Transfer Size: Unlimited 00:27:51.118 Max Number of Namespaces: 1024 00:27:51.118 Max Number of I/O Queues: 128 00:27:51.118 NVMe Specification Version (VS): 1.3 00:27:51.118 NVMe Specification Version (Identify): 1.3 00:27:51.118 Maximum Queue Entries: 1024 00:27:51.118 Contiguous Queues Required: No 00:27:51.118 Arbitration Mechanisms Supported 00:27:51.118 Weighted Round Robin: Not Supported 00:27:51.118 Vendor Specific: Not Supported 00:27:51.118 Reset Timeout: 7500 ms 00:27:51.118 Doorbell Stride: 4 bytes 00:27:51.118 NVM Subsystem Reset: Not Supported 00:27:51.118 Command Sets Supported 00:27:51.118 NVM Command Set: Supported 00:27:51.118 Boot Partition: Not Supported 00:27:51.118 
Memory Page Size Minimum: 4096 bytes 00:27:51.118 Memory Page Size Maximum: 4096 bytes 00:27:51.118 Persistent Memory Region: Not Supported 00:27:51.118 Optional Asynchronous Events Supported 00:27:51.118 Namespace Attribute Notices: Supported 00:27:51.118 Firmware Activation Notices: Not Supported 00:27:51.118 ANA Change Notices: Supported 00:27:51.118 PLE Aggregate Log Change Notices: Not Supported 00:27:51.118 LBA Status Info Alert Notices: Not Supported 00:27:51.118 EGE Aggregate Log Change Notices: Not Supported 00:27:51.118 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.118 Zone Descriptor Change Notices: Not Supported 00:27:51.118 Discovery Log Change Notices: Not Supported 00:27:51.118 Controller Attributes 00:27:51.118 128-bit Host Identifier: Supported 00:27:51.118 Non-Operational Permissive Mode: Not Supported 00:27:51.118 NVM Sets: Not Supported 00:27:51.118 Read Recovery Levels: Not Supported 00:27:51.118 Endurance Groups: Not Supported 00:27:51.118 Predictable Latency Mode: Not Supported 00:27:51.118 Traffic Based Keep ALive: Supported 00:27:51.118 Namespace Granularity: Not Supported 00:27:51.118 SQ Associations: Not Supported 00:27:51.118 UUID List: Not Supported 00:27:51.118 Multi-Domain Subsystem: Not Supported 00:27:51.118 Fixed Capacity Management: Not Supported 00:27:51.118 Variable Capacity Management: Not Supported 00:27:51.118 Delete Endurance Group: Not Supported 00:27:51.118 Delete NVM Set: Not Supported 00:27:51.118 Extended LBA Formats Supported: Not Supported 00:27:51.118 Flexible Data Placement Supported: Not Supported 00:27:51.118 00:27:51.118 Controller Memory Buffer Support 00:27:51.118 ================================ 00:27:51.118 Supported: No 00:27:51.118 00:27:51.118 Persistent Memory Region Support 00:27:51.118 ================================ 00:27:51.118 Supported: No 00:27:51.118 00:27:51.118 Admin Command Set Attributes 00:27:51.118 ============================ 00:27:51.118 Security Send/Receive: Not Supported 00:27:51.118 Format NVM: Not Supported 00:27:51.118 Firmware Activate/Download: Not Supported 00:27:51.118 Namespace Management: Not Supported 00:27:51.118 Device Self-Test: Not Supported 00:27:51.118 Directives: Not Supported 00:27:51.118 NVMe-MI: Not Supported 00:27:51.118 Virtualization Management: Not Supported 00:27:51.118 Doorbell Buffer Config: Not Supported 00:27:51.118 Get LBA Status Capability: Not Supported 00:27:51.118 Command & Feature Lockdown Capability: Not Supported 00:27:51.118 Abort Command Limit: 4 00:27:51.118 Async Event Request Limit: 4 00:27:51.118 Number of Firmware Slots: N/A 00:27:51.118 Firmware Slot 1 Read-Only: N/A 00:27:51.118 Firmware Activation Without Reset: N/A 00:27:51.118 Multiple Update Detection Support: N/A 00:27:51.118 Firmware Update Granularity: No Information Provided 00:27:51.118 Per-Namespace SMART Log: Yes 00:27:51.118 Asymmetric Namespace Access Log Page: Supported 00:27:51.118 ANA Transition Time : 10 sec 00:27:51.118 00:27:51.118 Asymmetric Namespace Access Capabilities 00:27:51.118 ANA Optimized State : Supported 00:27:51.118 ANA Non-Optimized State : Supported 00:27:51.118 ANA Inaccessible State : Supported 00:27:51.118 ANA Persistent Loss State : Supported 00:27:51.118 ANA Change State : Supported 00:27:51.118 ANAGRPID is not changed : No 00:27:51.119 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:51.119 00:27:51.119 ANA Group Identifier Maximum : 128 00:27:51.119 Number of ANA Group Identifiers : 128 00:27:51.119 Max Number of Allowed Namespaces : 1024 00:27:51.119 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:51.119 Command Effects Log Page: Supported 00:27:51.119 Get Log Page Extended Data: Supported 00:27:51.119 Telemetry Log Pages: Not Supported 00:27:51.119 Persistent Event Log Pages: Not Supported 00:27:51.119 Supported Log Pages Log Page: May Support 00:27:51.119 Commands Supported & Effects Log Page: Not Supported 00:27:51.119 Feature Identifiers & Effects Log Page:May Support 00:27:51.119 NVMe-MI Commands & Effects Log Page: May Support 00:27:51.119 Data Area 4 for Telemetry Log: Not Supported 00:27:51.119 Error Log Page Entries Supported: 128 00:27:51.119 Keep Alive: Supported 00:27:51.119 Keep Alive Granularity: 1000 ms 00:27:51.119 00:27:51.119 NVM Command Set Attributes 00:27:51.119 ========================== 00:27:51.119 Submission Queue Entry Size 00:27:51.119 Max: 64 00:27:51.119 Min: 64 00:27:51.119 Completion Queue Entry Size 00:27:51.119 Max: 16 00:27:51.119 Min: 16 00:27:51.119 Number of Namespaces: 1024 00:27:51.119 Compare Command: Not Supported 00:27:51.119 Write Uncorrectable Command: Not Supported 00:27:51.119 Dataset Management Command: Supported 00:27:51.119 Write Zeroes Command: Supported 00:27:51.119 Set Features Save Field: Not Supported 00:27:51.119 Reservations: Not Supported 00:27:51.119 Timestamp: Not Supported 00:27:51.119 Copy: Not Supported 00:27:51.119 Volatile Write Cache: Present 00:27:51.119 Atomic Write Unit (Normal): 1 00:27:51.119 Atomic Write Unit (PFail): 1 00:27:51.119 Atomic Compare & Write Unit: 1 00:27:51.119 Fused Compare & Write: Not Supported 00:27:51.119 Scatter-Gather List 00:27:51.119 SGL Command Set: Supported 00:27:51.119 SGL Keyed: Not Supported 00:27:51.119 SGL Bit Bucket Descriptor: Not Supported 00:27:51.119 SGL Metadata Pointer: Not Supported 00:27:51.119 Oversized SGL: Not Supported 00:27:51.119 SGL Metadata Address: Not Supported 00:27:51.119 SGL Offset: Supported 00:27:51.119 Transport SGL Data Block: Not Supported 00:27:51.119 Replay Protected Memory Block: Not Supported 00:27:51.119 00:27:51.119 Firmware Slot Information 00:27:51.119 ========================= 00:27:51.119 Active slot: 0 00:27:51.119 00:27:51.119 Asymmetric Namespace Access 00:27:51.119 =========================== 00:27:51.119 Change Count : 0 00:27:51.119 Number of ANA Group Descriptors : 1 00:27:51.119 ANA Group Descriptor : 0 00:27:51.119 ANA Group ID : 1 00:27:51.119 Number of NSID Values : 1 00:27:51.119 Change Count : 0 00:27:51.119 ANA State : 1 00:27:51.119 Namespace Identifier : 1 00:27:51.119 00:27:51.119 Commands Supported and Effects 00:27:51.119 ============================== 00:27:51.119 Admin Commands 00:27:51.119 -------------- 00:27:51.119 Get Log Page (02h): Supported 00:27:51.119 Identify (06h): Supported 00:27:51.119 Abort (08h): Supported 00:27:51.119 Set Features (09h): Supported 00:27:51.119 Get Features (0Ah): Supported 00:27:51.119 Asynchronous Event Request (0Ch): Supported 00:27:51.119 Keep Alive (18h): Supported 00:27:51.119 I/O Commands 00:27:51.119 ------------ 00:27:51.119 Flush (00h): Supported 00:27:51.119 Write (01h): Supported LBA-Change 00:27:51.119 Read (02h): Supported 00:27:51.119 Write Zeroes (08h): Supported LBA-Change 00:27:51.119 Dataset Management (09h): Supported 00:27:51.119 00:27:51.119 Error Log 00:27:51.119 ========= 00:27:51.119 Entry: 0 00:27:51.119 Error Count: 0x3 00:27:51.119 Submission Queue Id: 0x0 00:27:51.119 Command Id: 0x5 00:27:51.119 Phase Bit: 0 00:27:51.119 Status Code: 0x2 00:27:51.119 Status Code Type: 0x0 00:27:51.119 Do Not Retry: 1 00:27:51.119 
Error Location: 0x28 00:27:51.119 LBA: 0x0 00:27:51.119 Namespace: 0x0 00:27:51.119 Vendor Log Page: 0x0 00:27:51.119 ----------- 00:27:51.119 Entry: 1 00:27:51.119 Error Count: 0x2 00:27:51.119 Submission Queue Id: 0x0 00:27:51.119 Command Id: 0x5 00:27:51.119 Phase Bit: 0 00:27:51.119 Status Code: 0x2 00:27:51.119 Status Code Type: 0x0 00:27:51.119 Do Not Retry: 1 00:27:51.119 Error Location: 0x28 00:27:51.119 LBA: 0x0 00:27:51.119 Namespace: 0x0 00:27:51.119 Vendor Log Page: 0x0 00:27:51.119 ----------- 00:27:51.119 Entry: 2 00:27:51.119 Error Count: 0x1 00:27:51.119 Submission Queue Id: 0x0 00:27:51.119 Command Id: 0x4 00:27:51.119 Phase Bit: 0 00:27:51.119 Status Code: 0x2 00:27:51.119 Status Code Type: 0x0 00:27:51.119 Do Not Retry: 1 00:27:51.119 Error Location: 0x28 00:27:51.119 LBA: 0x0 00:27:51.119 Namespace: 0x0 00:27:51.119 Vendor Log Page: 0x0 00:27:51.119 00:27:51.119 Number of Queues 00:27:51.119 ================ 00:27:51.119 Number of I/O Submission Queues: 128 00:27:51.119 Number of I/O Completion Queues: 128 00:27:51.119 00:27:51.119 ZNS Specific Controller Data 00:27:51.119 ============================ 00:27:51.119 Zone Append Size Limit: 0 00:27:51.119 00:27:51.119 00:27:51.119 Active Namespaces 00:27:51.119 ================= 00:27:51.119 get_feature(0x05) failed 00:27:51.119 Namespace ID:1 00:27:51.119 Command Set Identifier: NVM (00h) 00:27:51.119 Deallocate: Supported 00:27:51.119 Deallocated/Unwritten Error: Not Supported 00:27:51.119 Deallocated Read Value: Unknown 00:27:51.119 Deallocate in Write Zeroes: Not Supported 00:27:51.119 Deallocated Guard Field: 0xFFFF 00:27:51.119 Flush: Supported 00:27:51.119 Reservation: Not Supported 00:27:51.119 Namespace Sharing Capabilities: Multiple Controllers 00:27:51.119 Size (in LBAs): 3750748848 (1788GiB) 00:27:51.119 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:51.119 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:51.119 UUID: ee0e4635-69a3-49f6-99e7-c72d6e62a9ee 00:27:51.119 Thin Provisioning: Not Supported 00:27:51.119 Per-NS Atomic Units: Yes 00:27:51.119 Atomic Write Unit (Normal): 8 00:27:51.119 Atomic Write Unit (PFail): 8 00:27:51.119 Preferred Write Granularity: 8 00:27:51.119 Atomic Compare & Write Unit: 8 00:27:51.119 Atomic Boundary Size (Normal): 0 00:27:51.119 Atomic Boundary Size (PFail): 0 00:27:51.119 Atomic Boundary Offset: 0 00:27:51.119 NGUID/EUI64 Never Reused: No 00:27:51.119 ANA group ID: 1 00:27:51.119 Namespace Write Protected: No 00:27:51.119 Number of LBA Formats: 1 00:27:51.119 Current LBA Format: LBA Format #00 00:27:51.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:51.119 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.119 rmmod nvme_tcp 00:27:51.119 rmmod nvme_fabrics 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.119 07:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:53.664 07:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:53.664 07:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:56.970 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:56.970 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.230 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:57.799 00:27:57.799 real 0m20.697s 00:27:57.799 user 0m5.545s 00:27:57.799 sys 0m12.138s 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 ************************************ 00:27:57.799 END TEST nvmf_identify_kernel_target 00:27:57.799 ************************************ 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.799 ************************************ 00:27:57.799 START TEST nvmf_auth_host 00:27:57.799 ************************************ 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:57.799 * Looking for test storage... 
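The timing block (real/user/sys) and the START/END banners above come from the harness's run_test wrapper. A minimal sketch of that pattern (simplified; the real wrapper also toggles xtrace state, which is why the `'[' 3 -le 1 ']'` argument guard and xtrace_disable appear in the trace):

    run_test() {
        local name=$1
        shift
        (( $# >= 1 )) || return 1   # mirrors the '[' 3 -le 1 ']' argument guard
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Usage, as in the trace:
    # run_test nvmf_auth_host ./test/nvmf/host/auth.sh --transport=tcp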
00:27:57.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:57.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.799 --rc genhtml_branch_coverage=1 00:27:57.799 --rc genhtml_function_coverage=1 00:27:57.799 --rc genhtml_legend=1 00:27:57.799 --rc geninfo_all_blocks=1 00:27:57.799 --rc geninfo_unexecuted_blocks=1 00:27:57.799 00:27:57.799 ' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:57.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.799 --rc genhtml_branch_coverage=1 00:27:57.799 --rc genhtml_function_coverage=1 00:27:57.799 --rc genhtml_legend=1 00:27:57.799 --rc geninfo_all_blocks=1 00:27:57.799 --rc geninfo_unexecuted_blocks=1 00:27:57.799 00:27:57.799 ' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:57.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.799 --rc genhtml_branch_coverage=1 00:27:57.799 --rc genhtml_function_coverage=1 00:27:57.799 --rc genhtml_legend=1 00:27:57.799 --rc geninfo_all_blocks=1 00:27:57.799 --rc geninfo_unexecuted_blocks=1 00:27:57.799 00:27:57.799 ' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:57.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.799 --rc genhtml_branch_coverage=1 00:27:57.799 --rc genhtml_function_coverage=1 00:27:57.799 --rc genhtml_legend=1 00:27:57.799 --rc geninfo_all_blocks=1 00:27:57.799 --rc geninfo_unexecuted_blocks=1 00:27:57.799 00:27:57.799 ' 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.799 07:28:32 
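The scripts/common.sh trace above is the harness's lcov version gate: cmp_versions splits each version string on '.', '-' and ':' into an array and compares numerically, component by component. A standalone sketch of the same idiom (not SPDK's exact code; non-numeric components such as "rc1" would need the decimal() scrubbing the trace shows):

    # version_lt A B -> status 0 if A < B numerically, component-wise.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            # Missing components count as 0, so "1.15" vs "2" becomes 1.15 vs 2.0.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # prints the message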
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.799 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.060 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.061 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.061 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.061 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.061 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.061 07:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.191 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.192 07:28:40 
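auth.sh opens by pinning the test matrix seen above: three digests (sha256/384/512) against five finite-field DH groups (ffdhe2048 through ffdhe8192) for NVMe over Fabrics in-band authentication (DH-HMAC-CHAP), plus empty keys/ckeys arrays to be filled with generated secrets. A sketch of the sweep this presumably drives; run_one_auth_case is a hypothetical stand-in for the per-combination logic, not a function from the harness:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            # Hypothetical helper: point the target at a DH-HMAC-CHAP key for
            # this digest/group and verify the host can (and cannot) connect.
            run_one_auth_case "$digest" "$dhgroup"
        done
    done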
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:06.192 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:06.192 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.192 
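gather_supported_nvmf_pci_devs, replayed above for the auth test, buckets candidate NICs by PCI vendor:device ID (Intel E810 is 8086:1592/8086:159b, X722 is 8086:37d2, plus a list of Mellanox IDs) and then inspects the bound driver, "ice" on this host. The same enumeration can be reproduced outside the harness; a sketch assuming pciutils is installed:

    # List E810 functions (device ID 0x159b, as matched in the trace) with
    # their bound driver and network interface name.
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        netdev=$(ls "/sys/bus/pci/devices/$bdf/net" 2>/dev/null)
        echo "$bdf driver=$drv net=${netdev:-none}"
    done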
07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:06.192 Found net devices under 0000:31:00.0: cvl_0_0 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:06.192 Found net devices under 0000:31:00.1: cvl_0_1 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.192 07:28:40 
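[editor's note] Each selected PCI function is then mapped to the net interfaces its driver registered by globbing the device's net/ directory in sysfs, which is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1 above. The lookup, reduced to its essential lines (addresses taken from this run; the link-state check traced at common.sh@418 is omitted for brevity):

# Sketch: PCI address -> kernel net interface name(s), as in common.sh@411/@427/@428
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done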
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:28:06.192 00:28:06.192 --- 10.0.0.2 ping statistics --- 00:28:06.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.192 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:28:06.192 00:28:06.192 --- 10.0.0.1 ping statistics --- 00:28:06.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.192 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:06.192 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1449285 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1449285 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1449285 ']' 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
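[editor's note] nvmf_tcp_init, traced above, wires the two E810 ports into a point-to-point test topology: cvl_0_0 moves into a fresh namespace (cvl_0_0_ns_spdk) at 10.0.0.2/24, cvl_0_1 stays in the root namespace at 10.0.0.1/24, an iptables rule admits TCP port 4420, and one ping in each direction proves the link before any NVMe traffic flows. The same setup as a standalone script (these are exactly the commands traced above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # this port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns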
00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:06.193 07:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4e7a2d285cfcbf5ed2aa35c9eb3c9c09 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vLw 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4e7a2d285cfcbf5ed2aa35c9eb3c9c09 0 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4e7a2d285cfcbf5ed2aa35c9eb3c9c09 0 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4e7a2d285cfcbf5ed2aa35c9eb3c9c09 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vLw 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vLw 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.vLw 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.135 07:28:41 
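[editor's note] nvmfappstart (host/auth.sh@69 above) launches nvmf_tgt wrapped in `ip netns exec cvl_0_0_ns_spdk`, so the SPDK application runs inside the namespace; later in this test it acts as the NVMe host, dialing the kernel target that will listen on 10.0.0.1 in the root namespace. waitforlisten then blocks until PID 1449285 answers on /var/tmp/spdk.sock. A simplified start-and-wait sequence in the spirit of autotest_common.sh (the rpc_get_methods readiness probe is my assumption about the exact check):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for ((i = 100; i > 0; i--)); do
    # ready once the app's RPC server responds on its UNIX domain socket
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
((i > 0)) || { echo "nvmf_tgt (pid $nvmfpid) never started listening"; exit 1; }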
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5613ea1a3c1977cb018c414e06981d2c6da3b7e1c92bda4b8a8eb022907fe771 00:28:07.135 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XOG 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5613ea1a3c1977cb018c414e06981d2c6da3b7e1c92bda4b8a8eb022907fe771 3 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5613ea1a3c1977cb018c414e06981d2c6da3b7e1c92bda4b8a8eb022907fe771 3 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5613ea1a3c1977cb018c414e06981d2c6da3b7e1c92bda4b8a8eb022907fe771 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XOG 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XOG 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XOG 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=836cf63e373a395e31c2e28073bde5e2dc8a240b2663e02c 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.234 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 836cf63e373a395e31c2e28073bde5e2dc8a240b2663e02c 0 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 836cf63e373a395e31c2e28073bde5e2dc8a240b2663e02c 0 
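[editor's note] Each gen_dhchap_key call above draws len/2 random bytes via xxd from /dev/urandom and hands the hex string to format_dhchap_key, which emits the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64 payload>:, where <hash> is 00/01/02/03 for null/SHA-256/SHA-384/SHA-512 and the payload is the secret with a CRC-32 appended. A standalone equivalent of the inline `python -` step (the little-endian CRC-32 suffix reflects my reading of the format; the values are keys[0] from this run):

key=4e7a2d285cfcbf5ed2aa35c9eb3c9c09   # 16 random bytes as hex, from the xxd call above
digest=0                               # 0 = null (no hash transformation)
python3 - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                 # the hex text itself is the secret
crc = struct.pack('<I', zlib.crc32(secret))   # 4-byte little-endian CRC-32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
PYEOF

The result is written to a mktemp file (spdk.key-null.vLw here), chmod 0600, and recorded in the keys/ckeys arrays; the remaining keys traced below follow the same recipe with different digests and lengths.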
00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=836cf63e373a395e31c2e28073bde5e2dc8a240b2663e02c 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.396 07:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.234 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.234 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.234 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2c645c5a3b919539dac60574bd08cf1b37028f1a6fb02d19 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Y38 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2c645c5a3b919539dac60574bd08cf1b37028f1a6fb02d19 2 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2c645c5a3b919539dac60574bd08cf1b37028f1a6fb02d19 2 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2c645c5a3b919539dac60574bd08cf1b37028f1a6fb02d19 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Y38 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Y38 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Y38 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.396 07:28:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6f6e85215ea6650ee636026397d5feab 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bF7 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6f6e85215ea6650ee636026397d5feab 1 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6f6e85215ea6650ee636026397d5feab 1 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6f6e85215ea6650ee636026397d5feab 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.396 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bF7 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bF7 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.bF7 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d7f005a645d607802342ee3ed6b819b 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.esb 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d7f005a645d607802342ee3ed6b819b 1 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d7f005a645d607802342ee3ed6b819b 1 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4d7f005a645d607802342ee3ed6b819b 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.397 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.657 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.esb 00:28:07.657 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.esb 00:28:07.657 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.esb 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6495a8aa91b3dcb9143fc97ee1d5ebef7ec0210783431478 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.o6f 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6495a8aa91b3dcb9143fc97ee1d5ebef7ec0210783431478 2 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6495a8aa91b3dcb9143fc97ee1d5ebef7ec0210783431478 2 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6495a8aa91b3dcb9143fc97ee1d5ebef7ec0210783431478 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.o6f 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.o6f 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.o6f 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.658 07:28:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bd25d6ff3f4bda2b60f0e07ad671611c 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i5C 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bd25d6ff3f4bda2b60f0e07ad671611c 0 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bd25d6ff3f4bda2b60f0e07ad671611c 0 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bd25d6ff3f4bda2b60f0e07ad671611c 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i5C 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i5C 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.i5C 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fad963ad3755188cbd9185c26b0dba2d298063e89d515ad1b75d9045a4c5671d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.06d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fad963ad3755188cbd9185c26b0dba2d298063e89d515ad1b75d9045a4c5671d 3 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fad963ad3755188cbd9185c26b0dba2d298063e89d515ad1b75d9045a4c5671d 3 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fad963ad3755188cbd9185c26b0dba2d298063e89d515ad1b75d9045a4c5671d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.06d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.06d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.06d 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1449285 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1449285 ']' 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.658 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vLw 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XOG ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XOG 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.234 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Y38 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Y38 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.bF7 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.esb ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.esb 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.o6f 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.i5C ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.i5C 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.06d 00:28:07.920 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.180 07:28:42 
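[editor's note] host/auth.sh@80-82 above register every generated key file with the SPDK application's keyring over RPC, so that later attach calls can reference secrets by name (key0..key4, ckey0..ckey3) rather than by path. Equivalent standalone calls (rpc_cmd is a thin wrapper over scripts/rpc.py; the file names are this run's temp files):

rpc=./scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.vLw
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XOG
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.234
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Y38
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.bF7
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.esb
$rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.o6f
$rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.i5C
$rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.06d   # keys[4] has no ckey pair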
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.180 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:08.181 07:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:12.382 Waiting for block devices as requested 00:28:12.382 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.382 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:12.642 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.642 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.903 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.903 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.903 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:13.164 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:13.164 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:13.164 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:14.107 No valid GPT data, bailing 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:14.107 07:28:48 
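[editor's note] setup.sh reset (traced above with its long vfio-pci -> ioatdma/nvme rebind list) returns the PCI functions to their kernel drivers, after which the script hunts for a local NVMe disk that is safe to export: not zoned, and carrying no recognizable partition table, so "No valid GPT data, bailing" is the desired outcome here, confirmed by the empty blkid PTTYPE probe. A condensed version of that selection loop (the spdk-gpt.py step is replaced by blkid alone for brevity):

for block in /sys/block/nvme*; do
    dev=${block##*/}
    # skip zoned namespaces; they cannot back a plain nvmet namespace
    [[ -e $block/queue/zoned && $(<$block/queue/zoned) != none ]] && continue
    # an empty PTTYPE means no partition table, i.e. the disk is not in use
    [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]] && { nvme=/dev/$dev; break; }
done
echo "backing device: ${nvme:?no unused NVMe disk found}"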
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:14.107 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:14.405 00:28:14.405 Discovery Log Number of Records 2, Generation counter 2 00:28:14.405 =====Discovery Log Entry 0====== 00:28:14.405 trtype: tcp 00:28:14.405 adrfam: ipv4 00:28:14.405 subtype: current discovery subsystem 00:28:14.405 treq: not specified, sq flow control disable supported 00:28:14.406 portid: 1 00:28:14.406 trsvcid: 4420 00:28:14.406 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:14.406 traddr: 10.0.0.1 00:28:14.406 eflags: none 00:28:14.406 sectype: none 00:28:14.406 =====Discovery Log Entry 1====== 00:28:14.406 trtype: tcp 00:28:14.406 adrfam: ipv4 00:28:14.406 subtype: nvme subsystem 00:28:14.406 treq: not specified, sq flow control disable supported 00:28:14.406 portid: 1 00:28:14.406 trsvcid: 4420 00:28:14.406 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:14.406 traddr: 10.0.0.1 00:28:14.406 eflags: none 00:28:14.406 sectype: none 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
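[editor's note] configure_kernel_target, completed above, provisions the Linux in-kernel NVMe-oF target purely through configfs: a subsystem nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by /dev/nvme0n1, a TCP port at 10.0.0.1:4420, and a symlink binding port to subsystem; `nvme discover` then reports the expected two records (the discovery subsystem plus cnode0), and host/auth.sh whitelists the test host NQN. The same provisioning as a standalone script (the addr_*/device_path/enable attribute names are the standard nvmet configfs layout; reading the bare `echo 0` at host/auth.sh@37 as attr_allow_any_host is my assumption):

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p $sub/namespaces/1 $nvmet/ports/1
echo /dev/nvme0n1 > $sub/namespaces/1/device_path
echo 1            > $sub/namespaces/1/enable
echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
echo tcp          > $nvmet/ports/1/addr_trtype
echo 4420         > $nvmet/ports/1/addr_trsvcid
echo ipv4         > $nvmet/ports/1/addr_adrfam
ln -s $sub $nvmet/ports/1/subsystems/        # binding the port starts the listener
nvme discover -a 10.0.0.1 -t tcp -s 4420     # expect 2 records: discovery + cnode0
mkdir $nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > $sub/attr_allow_any_host            # only allowed_hosts may connect now
ln -s $nvmet/hosts/nqn.2024-02.io.spdk:host0 $sub/allowed_hosts/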
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.406 07:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.406 nvme0n1 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.406 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
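[editor's note] nvmet_auth_set_key (host/auth.sh@42-51, first near the top of this block and again for keyid 0 just above) arms the kernel target's per-host DH-HMAC-CHAP expectation by writing the hash, DH group, and DHHC-1 secrets into the host's configfs entry. A sketch of where those four echoes land (the dhchap_* attribute names are my assumption about the nvmet configfs layout; the values are the keyid-0 pair from this run, with the controller key shortened here):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > $host/dhchap_hash      # digest used for the CHAP challenge
echo ffdhe2048      > $host/dhchap_dhgroup   # DH group for the key exchange
echo 'DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV:' > $host/dhchap_key
echo 'DHHC-1:03:NTYx...MU5mbaw=:'            > $host/dhchap_ctrl_key   # bidirectional auth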
00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 nvme0n1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.668 07:28:49 
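[editor's note] connect_authenticate (host/auth.sh@55-65) closes the loop on the initiator side: narrow the negotiable algorithms, attach a controller with the matching keyring entries, verify that the controller and its namespace (nvme0 / nvme0n1) appear, then detach. This cycle, extracted from the trace above for keyid 0, is what the digest/dhgroup/keyid loops at host/auth.sh@100-103 replay for every remaining combination below:

rpc=./scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] \
    || { echo 'DH-HMAC-CHAP authentication failed'; exit 1; }
$rpc bdev_nvme_detach_controller nvme0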
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.668 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.930 nvme0n1 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.930 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.192 nvme0n1 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.192 07:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.453 nvme0n1 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.453 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.715 nvme0n1 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.715 07:28:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.715 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.976 nvme0n1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.976 
07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.976 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 nvme0n1 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.238 07:28:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.238 07:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.500 nvme0n1 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.500 07:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.500 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.762 nvme0n1 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.762 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.763 07:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.763 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 nvme0n1 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.024 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.025 07:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.285 nvme0n1 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.285 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:17.547 07:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.547 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.808 nvme0n1 00:28:17.808 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:17.808 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.809 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.071 nvme0n1 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.071 07:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.332 nvme0n1 00:28:18.332 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.332 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.332 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.332 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.332 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.592 07:28:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.592 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.853 nvme0n1 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.853 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.854 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.424 nvme0n1 00:28:19.424 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.424 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.424 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.424 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.424 07:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 
00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.424 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.996 nvme0n1 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.996 07:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.996 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.997 07:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.568 nvme0n1 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.568 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.140 nvme0n1 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.140 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.141 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.141 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.141 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.141 07:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.712 nvme0n1 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.712 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.713 07:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:22.285 nvme0n1 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.285 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:22.546 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.547 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.117 nvme0n1 00:28:23.117 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.117 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.117 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.117 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.117 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.118 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:23.378 
07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:23.378 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.379 07:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.951 nvme0n1 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.951 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.212 
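Before each of those attaches, nvmet_auth_set_key (the host/auth.sh@42-51 frames above) reprograms the kernel nvmet target by echoing the digest, DH group, and DHHC-1 secrets for the host entry. Roughly, and assuming the standard nvmet configfs attribute names, which the trace itself does not show:

    # $key/$ckey stand for the DHHC-1:... strings echoed at @50/@51 above.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest under test
    echo ffdhe8192 > "$host/dhchap_dhgroup"      # DH group under test
    echo "$key" > "$host/dhchap_key"             # host secret for this keyid
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional auth, when a ctrl key exists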
07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.212 07:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.784 nvme0n1 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.784 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.045 07:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.617 nvme0n1 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.617 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.877 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.878 nvme0n1 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.878 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.139 nvme0n1 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:26.139 07:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.139 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:26.140 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.140 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.400 07:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.400 nvme0n1 00:28:26.400 07:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.400 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.400 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.400 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.401 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.661 nvme0n1 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:26.661 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.662 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.923 nvme0n1 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.923 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.924 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.924 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.924 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.924 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.924 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.185 nvme0n1 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.185 
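get_main_ns_ip, traced repeatedly at nvmf/common.sh@769-783, only resolves which address the attach should use: it maps the transport to a variable name and dereferences it, landing on 10.0.0.1 for tcp throughout this run. Reconstructed from the trace (approximate, since xtrace shows expanded values rather than the source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
        ip=${!ip}                              # NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }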
07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:27.185 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.186 07:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.186 07:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 nvme0n1 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.447 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.448 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.709 nvme0n1 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.709 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.970 nvme0n1 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.970 
07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.970 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.971 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.971 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.231 nvme0n1 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.231 
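
The span above repeatedly traces host/auth.sh@42-51, the target-side half of each iteration. A minimal reconstruction of that helper, assuming the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a hypothetical $nvmet_host directory such as /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 — xtrace does not print redirections, so the write targets here are an assumption:

    # sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey

        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"   # e.g. DHHC-1:03:ZmFk...

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"     # auth.sh@48
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"       # auth.sh@49
        echo "$key" > "$nvmet_host/dhchap_key"               # auth.sh@50
        # a controller (bidirectional) key is set only when one exists;
        # for keyid=4 the ckey is empty and auth.sh@51 short-circuits
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }
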
07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.231 07:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.491 nvme0n1 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.492 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.752 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.753 07:29:03 
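
The connect_authenticate half resolves the initiator address through get_main_ns_ip (nvmf/common.sh@769-783 in the trace). A sketch reconstructed from those traced statements; only the name of the transport selector variable ($TEST_TRANSPORT) is an assumption, everything else mirrors the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates

        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                   # common.sh@775
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # common.sh@776
        [[ -z ${!ip} ]] && return 1   # indirect expansion     # common.sh@778
        echo "${!ip}"                                          # common.sh@783
    }

    # standalone usage matching this run:
    #   TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1
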
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.753 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.013 nvme0n1 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.013 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.014 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.274 nvme0n1 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.274 07:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.274 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.275 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.535 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.796 nvme0n1 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.796 07:29:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.796 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.797 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.797 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.797 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.057 nvme0n1 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:30.057 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.058 07:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.630 nvme0n1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.630 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.202 nvme0n1 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.202 07:29:05 
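
Every secret in this log follows the DHHC-1 representation produced by nvme-cli's gen-dhchap-key: DHHC-1:<hh>:<base64>:, where <hh> names the transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the raw secret followed by a 4-byte CRC-32 of it. A quick check against the ckey printed just above (a 48-byte secret, so 52 bytes should decode out); treat the CRC detail as a best-effort reading of the format rather than something this log itself states:

    key='DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==:'
    b64=${key#DHHC-1:02:}                # strip the header...
    b64=${b64%:}                         # ...and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 52 = 48-byte secret + 4-byte CRC-32
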
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.202 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.203 07:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.203 07:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.773 nvme0n1 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:31.773 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.774 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.774 
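
On the host side, rpc_cmd is the test suite's wrapper around SPDK's JSON-RPC client, so the pair of calls traced above corresponds directly to scripts/rpc.py invocations. A sketch for the ffdhe6144/keyid=3 iteration just traced (all flags are taken verbatim from the log; it assumes key3/ckey3 were registered with SPDK's keyring earlier in the run, which this excerpt does not show):

    # restrict the initiator to one digest/dhgroup combination ...
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # ... then attach with DH-HMAC-CHAP, bidirectional (key3 + ckey3)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
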
07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.345 nvme0n1 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.345 07:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.916 nvme0n1 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.916 07:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.916 07:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.859 nvme0n1 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
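[Annotation] On the target side, each nvmet_auth_set_key digest/dhgroup/keyid call (traced at host/auth.sh@42-@51) installs the matching expectations into the kernel nvmet host entry before the initiator connects. The echo payloads ('hmac(sha384)', the dhgroup name, the DHHC-1 secrets) are verbatim from the log; the configfs attribute paths below are an assumption about the Linux nvmet auth interface, to be checked against the target setup earlier in this run.

    nvmet_auth_set_key() {
        # Sketch: payloads match the @48-@51 echoes, destinations are assumed.
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"

        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. 'hmac(sha384)'
        echo "$dhgroup"        > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe8192
        echo "$key"            > "${host_dir}/dhchap_key"      # DHHC-1:NN:...: secret
        # keyid 4 carries no controller key, so '[[ -z '' ]]' skips this write:
        [[ -z $ckey ]] || echo "$ckey" > "${host_dir}/dhchap_ctrl_key"
    }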
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.859 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.860 07:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.431 nvme0n1 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.431 
07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.431 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.372 nvme0n1 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.372 07:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
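[Annotation] The get_main_ns_ip block (nvmf/common.sh@769-@783) that precedes every attach resolves which address the initiator should dial. The trace shows the candidate map holding variable names rather than values, with an indirect expansion at the end; reassembled, with the exact guard conditions assumed:

    get_main_ns_ip() {
        # Sketch from the nvmf/common.sh@769-@783 entries above.
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # names of variables to read,
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # not addresses themselves

        # '[[ -z tcp ]]' / '[[ -z NVMF_INITIATOR_IP ]]' in the trace are these
        # guards after expansion of the transport and the selected name.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1  # '[[ -z 10.0.0.1 ]]' after indirection
        echo "${!ip}"                # the 'echo 10.0.0.1' seen at @783
    }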
DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.372 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.373 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.314 nvme0n1 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.314 07:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.314 07:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.314 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.315 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.315 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.315 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.315 07:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.886 nvme0n1 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.886 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
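[Annotation] At this point the outer digest loop advances from sha384 to sha512 and the dhgroup loop restarts at ffdhe2048 (the @100-@102 'for' entries). The sweep implied by those loop headers and the @103/@104 calls is a full cross product of digests, DH groups, and key slots; a sketch of the driver (array contents beyond the values visible in this excerpt are assumptions):

    # digests=(... sha384 sha512); dhgroups=(ffdhe2048 ... ffdhe8192)  # assumed
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do   # slots 0..4, matching key0..key4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done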
ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.147 nvme0n1 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.147 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.148 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.409 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
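[Annotation] Every rpc_cmd bdev_nvme_* call above is bracketed by xtrace_disable/set +x entries from common/autotest_common.sh; the command itself is a thin wrapper over SPDK's JSON-RPC client. A minimal stand-in (the real helper in autotest_common.sh also supports a persistent RPC connection; the socket path and variable names here are assumptions):

    rpc_cmd() {
        # Forward to scripts/rpc.py against the running target's RPC socket.
        "$rootdir/scripts/rpc.py" -s "${DEFAULT_RPC_ADDR:-/var/tmp/spdk.sock}" "$@"
    }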
"ckey${keyid}"}) 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.410 nvme0n1 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:37.410 
07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.410 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.671 nvme0n1 00:28:37.671 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.671 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.671 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.671 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.671 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.672 
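[Annotation] The key slots cycled through above use all four DHHC-1 transform identifiers (DHHC-1:00: through DHHC-1:03:). In the NVMe DH-HMAC-CHAP secret representation, the field after DHHC-1 names the hash used to transform the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 integrity check; treat that layout as an assumption to verify against the spec. The length arithmetic is easy to confirm from any key in this log:

    # 48 base64 chars -> 36 decoded bytes = 32-byte secret + 4-byte CRC (assumed layout)
    key='DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn:'
    b64=${key#DHHC-1:*:}   # strip the 'DHHC-1:NN:' prefix
    b64=${b64%:}           # and the trailing ':'
    echo -n "$b64" | base64 -d | wc -c   # prints 36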
07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.672 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.933 nvme0n1 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.933 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
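[Annotation] The common/autotest_common.sh@561/@10 pairs that sandwich every rpc_cmd are tracing hygiene: xtrace is switched off while the helper runs so the log records the intent without dumping rpc.py internals, and the recurring @589 '[[ 0 == 0 ]]' entries read like a nesting-depth guard after expansion. A simplified pair of helpers to that effect (the real ones track nesting and saved shell options more carefully):

    xtrace_disable() {
        XTRACE_STATE=$(set +o | grep xtrace)  # remember whether 'set -x' was active
        set +x
    }
    xtrace_restore() {
        eval "$XTRACE_STATE"  # re-enable tracing only if it was on before
    }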
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.934 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.195 nvme0n1 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.195 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.196 07:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.457 nvme0n1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.457 
07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.457 07:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.457 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.717 nvme0n1 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.717 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:38.718 07:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.718 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.978 nvme0n1 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.978 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.979 07:29:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.979 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.277 nvme0n1 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.277 
07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.277 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.278 07:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
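[Annotation] The ffdhe2048 and ffdhe3072 passes above cycle through key IDs 0-4 with the same five-step pattern: set the key on the target, pin the initiator to one digest/dhgroup pair, attach with the matching DH-CHAP key, check that the controller appeared, and detach. A minimal sketch of one such iteration as standalone RPC calls, assuming `rpc_cmd` in the trace wraps SPDK's scripts/rpc.py on the default socket and that key0/ckey0 are key names registered with the keyring earlier in the test (not shown in this excerpt):

    #!/usr/bin/env bash
    # One connect_authenticate iteration, reconstructed from the trace above.
    # Assumptions: rpc.py is SPDK's scripts/rpc.py talking to the default
    # /var/tmp/spdk.sock; key0/ckey0 were registered with the keyring earlier.
    rpc=scripts/rpc.py

    # Restrict the initiator to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach to the target, authenticating with key 0 and its controller key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up, then tear it down for the next key ID.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    $rpc bdev_nvme_detach_controller nvme0

Key IDs whose ckey slot is empty (keyid 4 above) skip --dhchap-ctrlr-key, which is what the `[[ -z '' ]]` checks at host/auth.sh@51 decide.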
00:28:39.538 nvme0n1 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.538 07:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.538 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.799 nvme0n1 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.799 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.061 07:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.061 07:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.061 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.322 nvme0n1 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.322 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.323 07:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.584 nvme0n1 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.584 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.585 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.846 nvme0n1 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.107 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.108 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.108 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.108 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 nvme0n1 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 07:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.369 07:29:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.369 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 nvme0n1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.942 07:29:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.942 07:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.516 nvme0n1 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.516 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.517 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.901 nvme0n1 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.901 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.215 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.216 07:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 nvme0n1 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.524 07:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.524 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.097 nvme0n1 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU3YTJkMjg1Y2ZjYmY1ZWQyYWEzNWM5ZWIzYzljMDm4PrRV: 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: ]] 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTYxM2VhMWEzYzE5NzdjYjAxOGM0MTRlMDY5ODFkMmM2ZGEzYjdlMWM5MmJkYTRiOGE4ZWIwMjI5MDdmZTc3MU5mbaw=: 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.097 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.098 07:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.040 nvme0n1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.040 07:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.984 nvme0n1 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.984 07:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:45.984 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.985 07:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.985 07:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 nvme0n1 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQ5NWE4YWE5MWIzZGNiOTE0M2ZjOTdlZTFkNWViZWY3ZWMwMjEwNzgzNDMxNDc4uuM8OQ==: 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmQyNWQ2ZmYzZjRiZGEyYjYwZjBlMDdhZDY3MTYxMWPlSbe+: 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.557 07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.557 
07:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.501 nvme0n1 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmFkOTYzYWQzNzU1MTg4Y2JkOTE4NWMyNmIwZGJhMmQyOTgwNjNlODlkNTE1YWQxYjc1ZDkwNDVhNGM1NjcxZA/rrJY=: 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.501 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.444 nvme0n1 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==:
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==:
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==:
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==:
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.445 request:
{
  "name": "nvme0",
  "trtype": "tcp",
  "traddr": "10.0.0.1",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2024-02.io.spdk:cnode0",
  "hostnqn": "nqn.2024-02.io.spdk:host0",
  "prchk_reftag": false,
  "prchk_guard": false,
  "hdgst": false,
  "ddgst": false,
  "allow_unrecognized_csi": false,
  "method": "bdev_nvme_attach_controller",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -5,
  "message": "Input/output error"
}
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.445 07:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
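The -5 (Input/output error) above is the expected outcome: host/auth.sh@112 wraps the attach in NOT, so this case passes only when an unauthenticated connection is refused. A minimal sketch of the pattern, assuming a NOT helper that simply inverts the wrapped command's exit status (the real autotest_common.sh helper also validates the argument, per the valid_exec_arg records in the trace):

NOT() {
    # Run the wrapped command; invert its status so an expected
    # failure counts as success for the test.
    if "$@"; then
        return 1 # the command unexpectedly succeeded
    fi
    return 0
}

# As exercised at host/auth.sh@112: the target now enforces sha256/ffdhe2048
# DH-HMAC-CHAP, so attaching with no --dhchap-key at all must fail.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0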
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:48.445 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.446 request:
{
  "name": "nvme0",
  "trtype": "tcp",
  "traddr": "10.0.0.1",
  "adrfam": "ipv4",
  "trsvcid": "4420",
  "subnqn": "nqn.2024-02.io.spdk:cnode0",
  "hostnqn": "nqn.2024-02.io.spdk:host0",
  "prchk_reftag": false,
  "prchk_guard": false,
  "hdgst": false,
  "ddgst": false,
  "dhchap_key": "key2",
  "allow_unrecognized_csi": false,
  "method": "bdev_nvme_attach_controller",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -5,
  "message": "Input/output error"
}
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
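Same pattern at host/auth.sh@117, now with a mismatched secret: the host presents key2 while the target entry for this hostnqn was loaded from keyid 1, so the DH-HMAC-CHAP exchange fails, the RPC again returns -5, and host/auth.sh@120 then checks that no controller was left behind. A sketch of that assertion pair, assuming rpc_cmd forwards its arguments to scripts/rpc.py as the trace suggests:

# Expect failure: key2 is not the DHCHAP secret the target holds for host0.
if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
       -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key2; then
    echo "wrong DHCHAP key was accepted" >&2
    exit 1
fi
# A failed attach must not leave a half-created controller around.
(($(rpc_cmd bdev_nvme_get_controllers | jq length) == 0))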
00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.446 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 request: 00:28:48.708 { 00:28:48.708 "name": "nvme0", 00:28:48.708 "trtype": "tcp", 00:28:48.708 "traddr": "10.0.0.1", 00:28:48.708 "adrfam": "ipv4", 00:28:48.708 "trsvcid": "4420", 00:28:48.708 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:48.708 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:48.708 "prchk_reftag": false, 00:28:48.708 "prchk_guard": false, 00:28:48.708 "hdgst": false, 00:28:48.708 "ddgst": false, 00:28:48.708 "dhchap_key": "key1", 00:28:48.708 "dhchap_ctrlr_key": "ckey2", 00:28:48.708 "allow_unrecognized_csi": false, 00:28:48.708 "method": "bdev_nvme_attach_controller", 00:28:48.708 "req_id": 1 00:28:48.708 } 00:28:48.708 Got JSON-RPC error response 00:28:48.708 response: 00:28:48.708 { 00:28:48.708 "code": -5, 00:28:48.708 "message": "Input/output 
error" 00:28:48.708 } 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.708 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 nvme0n1 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.709 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.970 request: 00:28:48.970 { 00:28:48.970 "name": "nvme0", 00:28:48.970 "dhchap_key": "key1", 00:28:48.970 "dhchap_ctrlr_key": "ckey2", 00:28:48.970 "method": "bdev_nvme_set_keys", 00:28:48.970 "req_id": 1 00:28:48.970 } 00:28:48.970 Got JSON-RPC error response 00:28:48.970 response: 00:28:48.970 { 00:28:48.970 "code": -13, 00:28:48.970 "message": "Permission denied" 00:28:48.970 } 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:48.970 07:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:49.916 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.916 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:49.916 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.916 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.916 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.177 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:50.178 07:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM2Y2Y2M2UzNzNhMzk1ZTMxYzJlMjgwNzNiZGU1ZTJkYzhhMjQwYjI2NjNlMDJjkzPBTQ==: 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: ]] 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmM2NDVjNWEzYjkxOTUzOWRhYzYwNTc0YmQwOGNmMWIzNzAyOGYxYTZmYjAyZDE5pY2C/w==: 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.123 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.385 nvme0n1 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmY2ZTg1MjE1ZWE2NjUwZWU2MzYwMjYzOTdkNWZlYWLxqjAn: 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: ]] 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ3ZjAwNWE2NDVkNjA3ODAyMzQyZWUzZWQ2YjgxOWJQKbHY: 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.385 request: 00:28:51.385 { 00:28:51.385 "name": "nvme0", 00:28:51.385 "dhchap_key": "key2", 00:28:51.385 "dhchap_ctrlr_key": "ckey1", 00:28:51.385 "method": "bdev_nvme_set_keys", 00:28:51.385 "req_id": 1 00:28:51.385 } 00:28:51.385 Got JSON-RPC error response 00:28:51.385 response: 00:28:51.385 { 00:28:51.385 "code": -13, 00:28:51.385 "message": "Permission denied" 00:28:51.385 } 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.385 07:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.385 07:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.385 07:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:51.385 07:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:52.328 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.328 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:52.329 07:29:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.329 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.590 rmmod nvme_tcp 00:28:52.590 rmmod nvme_fabrics 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1449285 ']' 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1449285 00:28:52.590 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1449285 ']' 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1449285 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1449285 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1449285' 00:28:52.591 killing process with pid 1449285 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1449285 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1449285 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:52.591 07:29:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:55.138 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:55.139 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:55.139 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:55.139 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:55.139 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:55.139 07:29:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:59.348 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:59.348 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:59.348 07:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vLw /tmp/spdk.key-null.234 /tmp/spdk.key-sha256.bF7 /tmp/spdk.key-sha384.o6f /tmp/spdk.key-sha512.06d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:59.348 07:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:03.563 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
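The cleanup that scrolled past just above unwinds the kernel nvmet target through configfs, leaf objects first, before setup.sh hands the PCI functions back to vfio-pci (the driver-rebind listing continues below). A hedged reconstruction of that removal order follows; the target of the bare `echo 0` is assumed to be the namespace enable attribute, since the trace does not show the redirection.

  # hedged sketch of the teardown order traced above (configfs removal is leaf-first)
  cfs=/sys/kernel/config/nvmet
  sub=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
  rm "$sub/allowed_hosts/nqn.2024-02.io.spdk:host0"    # drop the host ACL link
  rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$sub/namespaces/1/enable"                  # assumption: disable the ns first
  rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
  rmdir "$sub/namespaces/1" "$cfs/ports/1" "$sub"
  modprobe -r nvmet_tcp nvmet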
00:29:03.563 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:03.563 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:03.563 00:29:03.563 real 1m5.863s 00:29:03.563 user 0m58.628s 00:29:03.563 sys 0m17.377s 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.563 ************************************ 00:29:03.563 END TEST nvmf_auth_host 00:29:03.563 ************************************ 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.563 ************************************ 00:29:03.563 START TEST nvmf_digest 00:29:03.563 ************************************ 00:29:03.563 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:03.825 * Looking for test storage... 
00:29:03.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:03.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.825 --rc genhtml_branch_coverage=1 00:29:03.825 --rc genhtml_function_coverage=1 00:29:03.825 --rc genhtml_legend=1 00:29:03.825 --rc geninfo_all_blocks=1 00:29:03.825 --rc geninfo_unexecuted_blocks=1 00:29:03.825 00:29:03.825 ' 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:03.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.825 --rc genhtml_branch_coverage=1 00:29:03.825 --rc genhtml_function_coverage=1 00:29:03.825 --rc genhtml_legend=1 00:29:03.825 --rc geninfo_all_blocks=1 00:29:03.825 --rc geninfo_unexecuted_blocks=1 00:29:03.825 00:29:03.825 ' 00:29:03.825 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:03.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.825 --rc genhtml_branch_coverage=1 00:29:03.825 --rc genhtml_function_coverage=1 00:29:03.826 --rc genhtml_legend=1 00:29:03.826 --rc geninfo_all_blocks=1 00:29:03.826 --rc geninfo_unexecuted_blocks=1 00:29:03.826 00:29:03.826 ' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:03.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.826 --rc genhtml_branch_coverage=1 00:29:03.826 --rc genhtml_function_coverage=1 00:29:03.826 --rc genhtml_legend=1 00:29:03.826 --rc geninfo_all_blocks=1 00:29:03.826 --rc geninfo_unexecuted_blocks=1 00:29:03.826 00:29:03.826 ' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.826 
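The field-by-field walk traced above is how scripts/common.sh decides that the installed lcov (1.15) predates version 2 before choosing LCOV_OPTS: both strings are split on '.', '-' and ':' and compared numerically until one field differs. A condensed, hedged reimplementation of that comparator (the real helper in scripts/common.sh carries more cases):

  # hedged reimplementation of the lt/cmp_versions walk traced above
  lt() {
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal versions: not less-than
  }
  lt 1.15 2 && echo "old lcov"   # matches the 1 < 2 branch taken above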
07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:03.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.826 07:29:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.826 07:29:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.967 
07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:11.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:11.967 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.967 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:11.968 Found net devices under 0000:31:00.0: cvl_0_0 
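gather_supported_nvmf_pci_devs, traced above, whitelists NIC device IDs (E810 0x1592/0x159b, X722 0x37d2, a set of Mellanox IDs) and then maps each matching PCI function to its netdev through sysfs; the first E810 port resolved to cvl_0_0 here, and the second port follows below. A hedged sketch of that mapping, where the up-state test reads operstate as an assumption since the harness's exact check is not shown:

  # hedged sketch of the sysfs walk behind "Found net devices under ..."
  for pci in 0000:31:00.0 0000:31:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          dev=${path##*/}
          [[ $(<"$path/operstate") == up ]] || continue   # assumption: operstate check
          echo "Found net devices under $pci: $dev"
      done
  done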
00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:11.968 Found net devices under 0000:31:00.1: cvl_0_1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:29:11.968 00:29:11.968 --- 10.0.0.2 ping statistics --- 00:29:11.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.968 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:11.968 00:29:11.968 --- 10.0.0.1 ping statistics --- 00:29:11.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.968 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 ************************************ 00:29:11.968 START TEST nvmf_digest_clean 00:29:11.968 ************************************ 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1467816 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1467816 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1467816 ']' 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.968 07:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 [2024-11-20 07:29:46.676816] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:29:11.968 [2024-11-20 07:29:46.676888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.237 [2024-11-20 07:29:46.767869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.237 [2024-11-20 07:29:46.808789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.237 [2024-11-20 07:29:46.808827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.237 [2024-11-20 07:29:46.808835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.237 [2024-11-20 07:29:46.808845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.237 [2024-11-20 07:29:46.808851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
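A condensed sketch of the NVMe/TCP plumbing that nvmf_tcp_init performed above, assuming (as on this rig) the two E810 ports enumerate as cvl_0_0 and cvl_0_1; every command appears verbatim in the trace:

    # move the target port into its own namespace; the initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the two ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring everything up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target application itself then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

The two pings above (0.546 ms and 0.157 ms) confirm the path in both directions before any NVMe traffic is attempted.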
00:29:12.237 [2024-11-20 07:29:46.809507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.807 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.068 null0 00:29:13.068 [2024-11-20 07:29:47.586978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.068 [2024-11-20 07:29:47.611201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1468104 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1468104 /var/tmp/bperf.sock 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1468104 ']' 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.068 07:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.068 [2024-11-20 07:29:47.669169] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:29:13.068 [2024-11-20 07:29:47.669221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468104 ] 00:29:13.068 [2024-11-20 07:29:47.762606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.068 [2024-11-20 07:29:47.798568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.010 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.270 nvme0n1 00:29:14.270 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:14.270 07:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.270 Running I/O for 2 seconds... 
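The bdevperf side of each leg follows the same wait-for-rpc pattern; a condensed sketch, with the socket path and NQN exactly as used in this run:

    # bdevperf was launched with -z --wait-for-rpc, so it idles until driven over RPC
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest, so every data PDU carries a crc32c
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # start the 2-second workload defined on the bdevperf command line
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests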
00:29:16.599 19496.00 IOPS, 76.16 MiB/s [2024-11-20T06:29:51.366Z] 19779.00 IOPS, 77.26 MiB/s 00:29:16.599 Latency(us) 00:29:16.599 [2024-11-20T06:29:51.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:16.599 nvme0n1 : 2.01 19784.17 77.28 0.00 0.00 6463.94 3044.69 15619.41 00:29:16.599 [2024-11-20T06:29:51.366Z] =================================================================================================================== 00:29:16.599 [2024-11-20T06:29:51.366Z] Total : 19784.17 77.28 0.00 0.00 6463.94 3044.69 15619.41 00:29:16.599 { 00:29:16.599 "results": [ 00:29:16.599 { 00:29:16.599 "job": "nvme0n1", 00:29:16.599 "core_mask": "0x2", 00:29:16.599 "workload": "randread", 00:29:16.599 "status": "finished", 00:29:16.599 "queue_depth": 128, 00:29:16.599 "io_size": 4096, 00:29:16.599 "runtime": 2.005947, 00:29:16.599 "iops": 19784.17176525601, 00:29:16.599 "mibps": 77.2819209580313, 00:29:16.599 "io_failed": 0, 00:29:16.599 "io_timeout": 0, 00:29:16.599 "avg_latency_us": 6463.940783147709, 00:29:16.599 "min_latency_us": 3044.693333333333, 00:29:16.599 "max_latency_us": 15619.413333333334 00:29:16.599 } 00:29:16.599 ], 00:29:16.599 "core_count": 1 00:29:16.599 } 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:16.599 | select(.opcode=="crc32c") 00:29:16.599 | "\(.module_name) \(.executed)"' 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1468104 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1468104 ']' 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1468104 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1468104 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1468104' 00:29:16.599 killing process with pid 1468104 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1468104 00:29:16.599 Received shutdown signal, test time was about 2.000000 seconds 00:29:16.599 00:29:16.599 Latency(us) 00:29:16.599 [2024-11-20T06:29:51.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.599 [2024-11-20T06:29:51.366Z] =================================================================================================================== 00:29:16.599 [2024-11-20T06:29:51.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1468104 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:16.599 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1468790 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1468790 /var/tmp/bperf.sock 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1468790 ']' 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.860 07:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:16.860 [2024-11-20 07:29:51.411187] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
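Each leg ends with the same verification, visible in the trace above: read the accel framework's stats back from the bperf app and confirm that crc32c digests were actually executed, and by the software module (this is the scan_dsa=false path). The check, condensed from digest.sh:

    read -r acc_module acc_executed < <(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # expected here: module "software" with a non-zero executed count
    (( acc_executed > 0 )) && [[ $acc_module == software ]]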
00:29:16.860 [2024-11-20 07:29:51.411245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468790 ] 00:29:16.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.860 Zero copy mechanism will not be used. 00:29:16.860 [2024-11-20 07:29:51.500993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.860 [2024-11-20 07:29:51.530994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.432 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:17.432 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:17.432 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:17.432 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:17.432 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:17.693 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.693 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.263 nvme0n1 00:29:18.263 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:18.263 07:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.263 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:18.263 Zero copy mechanism will not be used. 00:29:18.263 Running I/O for 2 seconds... 
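As a sanity check, the IOPS and MiB/s columns in these result tables are self-consistent: MiB/s = IOPS × io_size / 2^20. For the 4 KiB randread leg above, 19784.17 × 4096 / 1048576 ≈ 77.28 MiB/s, matching the table; for this 128 KiB leg the multiplier is 131072 instead.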
00:29:20.146 3174.00 IOPS, 396.75 MiB/s [2024-11-20T06:29:54.913Z] 3168.00 IOPS, 396.00 MiB/s 00:29:20.146 Latency(us) 00:29:20.146 [2024-11-20T06:29:54.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.146 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:20.146 nvme0n1 : 2.00 3169.13 396.14 0.00 0.00 5046.14 815.79 15510.19 00:29:20.146 [2024-11-20T06:29:54.913Z] =================================================================================================================== 00:29:20.146 [2024-11-20T06:29:54.913Z] Total : 3169.13 396.14 0.00 0.00 5046.14 815.79 15510.19 00:29:20.146 { 00:29:20.146 "results": [ 00:29:20.146 { 00:29:20.146 "job": "nvme0n1", 00:29:20.146 "core_mask": "0x2", 00:29:20.146 "workload": "randread", 00:29:20.146 "status": "finished", 00:29:20.146 "queue_depth": 16, 00:29:20.146 "io_size": 131072, 00:29:20.146 "runtime": 2.004334, 00:29:20.146 "iops": 3169.132489894399, 00:29:20.146 "mibps": 396.1415612367999, 00:29:20.146 "io_failed": 0, 00:29:20.146 "io_timeout": 0, 00:29:20.146 "avg_latency_us": 5046.136423173803, 00:29:20.146 "min_latency_us": 815.7866666666666, 00:29:20.146 "max_latency_us": 15510.186666666666 00:29:20.146 } 00:29:20.146 ], 00:29:20.146 "core_count": 1 00:29:20.146 } 00:29:20.146 07:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:20.146 07:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:20.147 07:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:20.147 07:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:20.147 | select(.opcode=="crc32c") 00:29:20.147 | "\(.module_name) \(.executed)"' 00:29:20.147 07:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1468790 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1468790 ']' 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1468790 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1468790 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1468790' 00:29:20.407 killing process with pid 1468790 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1468790 00:29:20.407 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.407 00:29:20.407 Latency(us) 00:29:20.407 [2024-11-20T06:29:55.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.407 [2024-11-20T06:29:55.174Z] =================================================================================================================== 00:29:20.407 [2024-11-20T06:29:55.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.407 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1468790 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1469476 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1469476 /var/tmp/bperf.sock 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1469476 ']' 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:20.668 07:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.668 [2024-11-20 07:29:55.308115] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:29:20.668 [2024-11-20 07:29:55.308184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469476 ] 00:29:20.668 [2024-11-20 07:29:55.399169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.668 [2024-11-20 07:29:55.428048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.610 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.869 nvme0n1 00:29:21.869 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.869 07:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.146 Running I/O for 2 seconds... 
00:29:24.029 21612.00 IOPS, 84.42 MiB/s [2024-11-20T06:29:58.796Z] 21681.50 IOPS, 84.69 MiB/s 00:29:24.029 Latency(us) 00:29:24.029 [2024-11-20T06:29:58.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.029 nvme0n1 : 2.00 21691.72 84.73 0.00 0.00 5895.94 2198.19 10431.15 00:29:24.029 [2024-11-20T06:29:58.796Z] =================================================================================================================== 00:29:24.029 [2024-11-20T06:29:58.796Z] Total : 21691.72 84.73 0.00 0.00 5895.94 2198.19 10431.15 00:29:24.029 { 00:29:24.029 "results": [ 00:29:24.029 { 00:29:24.029 "job": "nvme0n1", 00:29:24.029 "core_mask": "0x2", 00:29:24.029 "workload": "randwrite", 00:29:24.029 "status": "finished", 00:29:24.030 "queue_depth": 128, 00:29:24.030 "io_size": 4096, 00:29:24.030 "runtime": 2.004959, 00:29:24.030 "iops": 21691.715391686314, 00:29:24.030 "mibps": 84.73326324877466, 00:29:24.030 "io_failed": 0, 00:29:24.030 "io_timeout": 0, 00:29:24.030 "avg_latency_us": 5895.935876388218, 00:29:24.030 "min_latency_us": 2198.1866666666665, 00:29:24.030 "max_latency_us": 10431.146666666667 00:29:24.030 } 00:29:24.030 ], 00:29:24.030 "core_count": 1 00:29:24.030 } 00:29:24.030 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.030 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.030 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.030 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.030 | select(.opcode=="crc32c") 00:29:24.030 | "\(.module_name) \(.executed)"' 00:29:24.030 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1469476 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1469476 ']' 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1469476 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1469476 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1469476' 00:29:24.290 killing process with pid 1469476 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1469476 00:29:24.290 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.290 00:29:24.290 Latency(us) 00:29:24.290 [2024-11-20T06:29:59.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.290 [2024-11-20T06:29:59.057Z] =================================================================================================================== 00:29:24.290 [2024-11-20T06:29:59.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.290 07:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1469476 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1470209 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1470209 /var/tmp/bperf.sock 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1470209 ']' 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:24.290 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.551 [2024-11-20 07:29:59.093178] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
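For reference, the bdevperf invocation for this leg, with a gloss of its flags as exercised here (flag meanings per common bdevperf usage, offered as a reading aid rather than taken from the trace):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc

      -m 2            core mask: bit 1 set, so the reactor lands on core 1 (as logged)
      -r              RPC socket that digest.sh drives
      -w randwrite    workload type
      -o 131072       I/O size in bytes (128 KiB, hence the zero-copy notice)
      -t 2            run time in seconds
      -q 16           queue depth
      -z              stay idle until a perform_tests RPC arrives
      --wait-for-rpc  hold framework init until framework_start_init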
00:29:24.551 [2024-11-20 07:29:59.093238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470209 ] 00:29:24.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:24.551 Zero copy mechanism will not be used. 00:29:24.551 [2024-11-20 07:29:59.183260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.551 [2024-11-20 07:29:59.213037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.122 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.122 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:25.122 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:25.122 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:25.122 07:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:25.382 07:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.382 07:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.642 nvme0n1 00:29:25.642 07:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:25.642 07:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.902 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.902 Zero copy mechanism will not be used. 00:29:25.902 Running I/O for 2 seconds... 
00:29:27.786 5381.00 IOPS, 672.62 MiB/s [2024-11-20T06:30:02.553Z] 5034.00 IOPS, 629.25 MiB/s 00:29:27.786 Latency(us) 00:29:27.786 [2024-11-20T06:30:02.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.787 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:27.787 nvme0n1 : 2.00 5032.32 629.04 0.00 0.00 3174.47 1645.23 7918.93 00:29:27.787 [2024-11-20T06:30:02.554Z] =================================================================================================================== 00:29:27.787 [2024-11-20T06:30:02.554Z] Total : 5032.32 629.04 0.00 0.00 3174.47 1645.23 7918.93 00:29:27.787 { 00:29:27.787 "results": [ 00:29:27.787 { 00:29:27.787 "job": "nvme0n1", 00:29:27.787 "core_mask": "0x2", 00:29:27.787 "workload": "randwrite", 00:29:27.787 "status": "finished", 00:29:27.787 "queue_depth": 16, 00:29:27.787 "io_size": 131072, 00:29:27.787 "runtime": 2.004643, 00:29:27.787 "iops": 5032.31747498183, 00:29:27.787 "mibps": 629.0396843727287, 00:29:27.787 "io_failed": 0, 00:29:27.787 "io_timeout": 0, 00:29:27.787 "avg_latency_us": 3174.469362939466, 00:29:27.787 "min_latency_us": 1645.2266666666667, 00:29:27.787 "max_latency_us": 7918.933333333333 00:29:27.787 } 00:29:27.787 ], 00:29:27.787 "core_count": 1 00:29:27.787 } 00:29:27.787 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.787 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.787 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.787 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.787 | select(.opcode=="crc32c") 00:29:27.787 | "\(.module_name) \(.executed)"' 00:29:27.787 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1470209 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1470209 ']' 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1470209 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1470209 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1470209' 00:29:28.048 killing process with pid 1470209 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1470209 00:29:28.048 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.048 00:29:28.048 Latency(us) 00:29:28.048 [2024-11-20T06:30:02.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.048 [2024-11-20T06:30:02.815Z] =================================================================================================================== 00:29:28.048 [2024-11-20T06:30:02.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1470209 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1467816 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1467816 ']' 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1467816 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:28.048 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:28.309 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1467816 00:29:28.309 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:28.309 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:28.309 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1467816' 00:29:28.309 killing process with pid 1467816 00:29:28.309 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1467816 00:29:28.310 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1467816 00:29:28.310 00:29:28.310 real 0m16.382s 00:29:28.310 user 0m32.478s 00:29:28.310 sys 0m3.510s 00:29:28.310 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:28.310 07:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:28.310 ************************************ 00:29:28.310 END TEST nvmf_digest_clean 00:29:28.310 ************************************ 00:29:28.310 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:28.310 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:28.310 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:28.310 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.570 ************************************ 00:29:28.570 START TEST nvmf_digest_error 00:29:28.570 ************************************ 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1471232 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1471232 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1471232 ']' 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:28.570 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.570 [2024-11-20 07:30:03.136322] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:29:28.570 [2024-11-20 07:30:03.136370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.570 [2024-11-20 07:30:03.219720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.570 [2024-11-20 07:30:03.254222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.570 [2024-11-20 07:30:03.254254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.570 [2024-11-20 07:30:03.254262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.570 [2024-11-20 07:30:03.254268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.570 [2024-11-20 07:30:03.254274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.570 [2024-11-20 07:30:03.254871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.512 [2024-11-20 07:30:03.964900] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.512 07:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.512 null0 00:29:29.512 [2024-11-20 07:30:04.048479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.512 [2024-11-20 07:30:04.072691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1471324 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1471324 /var/tmp/bperf.sock 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1471324 ']' 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.512 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.513 [2024-11-20 07:30:04.129854] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:29:29.513 [2024-11-20 07:30:04.129907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471324 ] 00:29:29.513 [2024-11-20 07:30:04.220433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.513 [2024-11-20 07:30:04.250618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.456 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:30.456 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:30.456 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:30.456 07:30:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.456 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.028 nvme0n1 00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
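This is where nvmf_digest_error departs from the clean variant: crc32c was assigned to the accel error-injection module when the target started, left in pass-through with '-t disable' above, and is armed next (see the entries that follow). A condensed sketch of the knobs involved, noting that rpc_cmd resolves to the target's default /var/tmp/spdk.sock while the bperf socket is the host side:

    # target: route all crc32c operations through the error-injection accel module
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error
    # host: keep per-controller NVMe error counters and retry failed I/O indefinitely
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target: start corrupting crc32c results (arguments exactly as traced below)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then surfaces on the host as a 'data digest error on tqpair' line, completed as COMMAND TRANSIENT TRANSPORT ERROR and retried rather than failed, which is the behavior under test.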
00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:31.028 07:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.028 Running I/O for 2 seconds... 00:29:31.028 [2024-11-20 07:30:05.642850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.028 [2024-11-20 07:30:05.642886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.028 [2024-11-20 07:30:05.642895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.028 [2024-11-20 07:30:05.653835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.028 [2024-11-20 07:30:05.653855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.028 [2024-11-20 07:30:05.653865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.028 [2024-11-20 07:30:05.666196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.028 [2024-11-20 07:30:05.666215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.028 [2024-11-20 07:30:05.666222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.028 [2024-11-20 07:30:05.679776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.028 [2024-11-20 07:30:05.679795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.028 [2024-11-20 07:30:05.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.028 [2024-11-20 07:30:05.690061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.028 [2024-11-20 07:30:05.690079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.028 [2024-11-20 07:30:05.690086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.028 [2024-11-20 07:30:05.703470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.703488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.703495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.716957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.716975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.716982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.730035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.730053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.730060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.740360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.740377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.740384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.753146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.753164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.753171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.766695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.766713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.766719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.029 [2024-11-20 07:30:05.780951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.029 [2024-11-20 07:30:05.780969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.029 [2024-11-20 07:30:05.780976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.793695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.793716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.793723] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.805520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.805537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.805544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.818426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.818443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.831394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.831411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.831418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.843141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.843159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.843165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.289 [2024-11-20 07:30:05.855927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.289 [2024-11-20 07:30:05.855944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.289 [2024-11-20 07:30:05.855951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.869117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.869135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.869142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.881985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.882002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 
07:30:05.882009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.891855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.891876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.891883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.904596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.904614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.904620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.918980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.918998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.919005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.931309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.931327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.931333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.943786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.943804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.943810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.957408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.957426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.957433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.970183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.970200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.970206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.981309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.981326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.981333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:05.994004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:05.994022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:05.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:06.007135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:06.007153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:06.007163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:06.019049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:06.019067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:06.019073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:06.032562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:06.032580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:06.032588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.290 [2024-11-20 07:30:06.042788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.290 [2024-11-20 07:30:06.042806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.290 [2024-11-20 07:30:06.042812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.056342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.056361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:13108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.056367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.068733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.068750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.068757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.082005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.082022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.082029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.095073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.095092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.095098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.107532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.107550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.107557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.119747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.119765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.119771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.130482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.130499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.130506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.143583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.143602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.157587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.157605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.157612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.169495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.169512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.169519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.182292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.182310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.182317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.195738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.195755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.195762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.208691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.208709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.208715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.221477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.221495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.552 [2024-11-20 07:30:06.221505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.552 [2024-11-20 07:30:06.231143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2554f80) 00:29:31.552 [2024-11-20 07:30:06.231161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.231168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.244222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.244240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.257041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.257059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.257065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.270140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.270157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.270164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.284021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.284039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.284045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.296452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.296470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.296477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.553 [2024-11-20 07:30:06.307806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.553 [2024-11-20 07:30:06.307825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.553 [2024-11-20 07:30:06.307831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.320888] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.320906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.320912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.332893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.332915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.332922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.346267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.346285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.346291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.358199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.358216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.370305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.370322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.370328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.383341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.383358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.383365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.395933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.815 [2024-11-20 07:30:06.395951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.815 [2024-11-20 07:30:06.395957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:31.815 [2024-11-20 07:30:06.408448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.421193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.421211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.421218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.434310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.434328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.446308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.446325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.458539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.458557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.458564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.470874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.470892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.470899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.484282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.484299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.484306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.496106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.496124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.496131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.508270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.508288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.508294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.521967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.521985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.521991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.533608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.533626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.533633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.546406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.546424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.546434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.559560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.559579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.559586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.816 [2024-11-20 07:30:06.572993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:31.816 [2024-11-20 07:30:06.573011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.816 [2024-11-20 07:30:06.573018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.584419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.584436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.584443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.597991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.598009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.598016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.609314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.609333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.609340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 20131.00 IOPS, 78.64 MiB/s [2024-11-20T06:30:06.844Z] [2024-11-20 07:30:06.623785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.623802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.623809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.634849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.634870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.634877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.647247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.647265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.647272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.662129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.662146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.662153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.671660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.671678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.671685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.687043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.687061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.687068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.701692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.701709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.701716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.713807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.713825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.713832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.725845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.725867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.725873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.737310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.737328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.737334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.749604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.749622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.749629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.763366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.763385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.763394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.776001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.776019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.776026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.789272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.789289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.789296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.800150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.800175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.812340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.812357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.812364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.827060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.077 [2024-11-20 07:30:06.827078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.077 [2024-11-20 07:30:06.840300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 
00:29:32.077 [2024-11-20 07:30:06.840318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.077 [2024-11-20 07:30:06.840325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.852403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.852420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.852426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.863471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.863489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.863496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.876475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.876496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.876502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.890731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.890749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.890756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.900995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.901012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.901019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.914680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.914698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.914705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.927352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.927370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.927376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.940359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.940377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.940384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.953225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.953242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.953248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.338 [2024-11-20 07:30:06.965030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.338 [2024-11-20 07:30:06.965048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.338 [2024-11-20 07:30:06.965055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.339 [2024-11-20 07:30:06.978101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.339 [2024-11-20 07:30:06.978119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.339 [2024-11-20 07:30:06.978125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.339 [2024-11-20 07:30:06.988631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.339 [2024-11-20 07:30:06.988649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.339 [2024-11-20 07:30:06.988656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.339 [2024-11-20 07:30:07.001430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80) 00:29:32.339 [2024-11-20 07:30:07.001448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.339 [2024-11-20 07:30:07.001454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.339 [2024-11-20 07:30:07.015139] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80)
00:29:32.339 [2024-11-20 07:30:07.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.339 [2024-11-20 07:30:07.015163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~48 further record triplets (07:30:07.028221 through 07:30:07.614393) elided: the same nvme_tcp.c:1365 data digest error on tqpair=(0x2554f80), each followed by the offending READ (sqid:1, len:1, varying cid/lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:0001 ...]
00:29:33.123 20229.00 IOPS, 79.02 MiB/s [2024-11-20T06:30:07.890Z] [2024-11-20 07:30:07.627248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2554f80)
00:29:33.123 [2024-11-20 07:30:07.627266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.123 [2024-11-20 07:30:07.627278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:33.123
00:29:33.123 Latency(us)
00:29:33.123 [2024-11-20T06:30:07.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:33.123 nvme0n1 : 2.00 20249.67 79.10 0.00 0.00 6313.87 2402.99 17585.49
00:29:33.123 [2024-11-20T06:30:07.890Z] ===================================================================================================================
00:29:33.123 [2024-11-20T06:30:07.890Z] Total : 20249.67 79.10 0.00 0.00 6313.87 2402.99 17585.49
00:29:33.123 {
00:29:33.123   "results": [
00:29:33.123     {
00:29:33.123       "job": "nvme0n1",
00:29:33.123       "core_mask": "0x2",
00:29:33.123       "workload": "randread",
00:29:33.123       "status": "finished",
00:29:33.123       "queue_depth": 128,
00:29:33.123       "io_size": 4096,
00:29:33.123       "runtime": 2.00428,
00:29:33.123       "iops": 20249.66571536911,
00:29:33.123       "mibps": 79.10025670066058,
00:29:33.123       "io_failed": 0,
00:29:33.123       "io_timeout": 0,
00:29:33.123       "avg_latency_us": 6313.874264688973,
00:29:33.123       "min_latency_us": 2402.9866666666667,
00:29:33.123       "max_latency_us": 17585.493333333332
00:29:33.123     }
00:29:33.123   ],
00:29:33.123   "core_count": 1
00:29:33.123 }
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:33.123 | .driver_specific
00:29:33.123 | .nvme_error
00:29:33.123 | .status_code
00:29:33.123 | .command_transient_transport_error'
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1471324
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1471324 ']'
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1471324
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:33.123 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1471324
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1471324'
00:29:33.384 killing process with pid 1471324
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1471324
00:29:33.384 Received shutdown signal, test time was about 2.000000 seconds
00:29:33.384
00:29:33.384 Latency(us)
00:29:33.384 [2024-11-20T06:30:08.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.384 [2024-11-20T06:30:08.151Z] ===================================================================================================================
00:29:33.384 [2024-11-20T06:30:08.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1471324
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1472079
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1472079 /var/tmp/bperf.sock
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1472079 ']'
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:33.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:33.384 07:30:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
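The xtrace above is the teardown/launch seam between the two runs: bdevperf pid 1471324 is killed and reaped, then run_bperf_err restarts the whole exercise at a 128 KiB block size (131072) and queue depth 16. Condensed into a standalone sketch; the poll loop is an illustrative stand-in for the waitforlisten helper, whose body is not printed in this log, and rpc_get_methods is used only as a cheap RPC to probe the socket with:

  # Launch bdevperf suspended (-z: do nothing until 'perform_tests' arrives over RPC),
  # on a private RPC socket so it does not collide with the target's own socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten equivalent: poll until the UNIX-domain RPC socket answers.
  for ((i = 0; i < 100; i++)); do
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          rpc_get_methods &> /dev/null && break
      sleep 0.1
  done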
00:29:33.384 [2024-11-20 07:30:08.056248] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:29:33.384 [2024-11-20 07:30:08.056310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472079 ]
00:29:33.384 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:33.384 Zero copy mechanism will not be used.
00:29:33.384 [2024-11-20 07:30:08.143368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:33.644 [2024-11-20 07:30:08.173045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:34.214 07:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:34.214 07:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:34.214 07:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:34.214 07:30:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:34.474 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:34.734 nvme0n1
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:34.734 07:30:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:34.734 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:34.734 Zero copy mechanism will not be used.
00:29:34.734 Running I/O for 2 seconds...
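Those RPCs are the entire mechanism of the test: error counting is enabled in the NVMe bdev layer, CRC32C corruption in the accel layer is parked while the controller attaches, the attach itself turns on the TCP data digest (--ddgst) so every data PDU carries a CRC32C, and corruption is then switched on before the timed run starts. Condensed into a sketch (same commands and arguments as the trace above; only the $rpc shorthand is introduced here, and the meaning of -i 32 is taken from the trace rather than asserted from documentation):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep per-status error counters; retry I/O without limit
  $rpc accel_error_inject_error -o crc32c -t disable                  # no corruption while the controller attaches
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on: every data PDU gets a CRC32C
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # start corrupting crc32c results (-i 32 as traced)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                            # kick off the 2-second timed run

Every READ completion whose received data digest fails that corrupted CRC32C check then shows up below as a data digest error plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.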
00:29:34.734 [2024-11-20 07:30:09.363723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0)
00:29:34.734 [2024-11-20 07:30:09.363756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.734 [2024-11-20 07:30:09.363766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~67 further record triplets (07:30:09.373789 through 07:30:10.030083) elided: the same nvme_tcp.c:1365 data digest error on tqpair=(0x23c85a0), each followed by the offending READ (sqid:1, len:32, cid 0-12, varying lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:29:35.520 [2024-11-20 07:30:10.042050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0)
00:29:35.520 [2024-11-20 07:30:10.042071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.520 [2024-11-20 07:30:10.042077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.054849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.054880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.063416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.063436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.063442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.069465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.069484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.069491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.076171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.076191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.076197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.082975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.082995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.083002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.091418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.091438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.091445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.102741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.102761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.102768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.113677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.113697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.113704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.124757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.124777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.135451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.520 [2024-11-20 07:30:10.135471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.520 [2024-11-20 07:30:10.135478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.520 [2024-11-20 07:30:10.145056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.145076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.145082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.156199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.156219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.156226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.166220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.166240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.166246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.177091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.177110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:35.521 [2024-11-20 07:30:10.177116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.187545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.187564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.187571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.198523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.198542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.198552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.204185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.204204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.204210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.209825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.209844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.209851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.215106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.215125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.215131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.225321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.225340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.225346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.237471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.237490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.237497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.250231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.250251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.250258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.263416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.263436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.263442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.521 [2024-11-20 07:30:10.277211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.521 [2024-11-20 07:30:10.277231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.521 [2024-11-20 07:30:10.277238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.782 [2024-11-20 07:30:10.287083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.782 [2024-11-20 07:30:10.287103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.782 [2024-11-20 07:30:10.287110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.782 [2024-11-20 07:30:10.297561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.297581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.297588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.307241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.307260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.307266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.319925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.319944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.319951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.333087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.333107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.333114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.345669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.345687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.345694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.357978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.357998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.358005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 3077.00 IOPS, 384.62 MiB/s [2024-11-20T06:30:10.550Z] [2024-11-20 07:30:10.371397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.371417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.371423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.384001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.384021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.384031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.395228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.395247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.395254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.406330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.406350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.406357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.414933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.414952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.414959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.420490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.420509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.420516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.431968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.431987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.431994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.443274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.443294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.443300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.453657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.453676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.453683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.465180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.465198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.465205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.477831] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.477853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.477859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.489740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.489759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.489766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.500702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.500721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.500727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.510259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.510277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.510284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.521199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.521217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.521224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.531571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.531590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.531596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.783 [2024-11-20 07:30:10.543037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:35.783 [2024-11-20 07:30:10.543056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.783 [2024-11-20 07:30:10.543063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.552733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.552752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.552759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.557532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.557552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.557559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.569311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.569329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.569336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.578006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.578025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.578031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.588917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.588936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.588942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.601229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.601248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.601255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.613889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.613908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.613914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.624707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.624726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.634808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.634826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.634833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.644202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.644222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.644228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.654944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.654976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.667324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.667342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.667349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.679000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.679019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.679026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.691430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.691449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.691456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.704150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.704169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.704175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.717455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.717482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.729286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.736810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.736829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.736835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.747936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.747955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.747961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.757943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.757963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.757969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.766516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.766535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 
[2024-11-20 07:30:10.766542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.777479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.777498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.777504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.787949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.787968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.787974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.044 [2024-11-20 07:30:10.798466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.044 [2024-11-20 07:30:10.798483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.044 [2024-11-20 07:30:10.798490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.808819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.808839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.808846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.818944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.818963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.818970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.829273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.829292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.829299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.841205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.841223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.853848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.853871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.853877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.866722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.866740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.866746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.879503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.879521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.879528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.891376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.891402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.903436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.903453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.903460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.913756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.913775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.913781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.924276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.924295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.924301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.933740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.933758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.933765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.942596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.942618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.942624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.952961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.952980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.305 [2024-11-20 07:30:10.952986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.305 [2024-11-20 07:30:10.963304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.305 [2024-11-20 07:30:10.963322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:10.963329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:10.972025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:10.972044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:10.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:10.981839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:10.981857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:10.981868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:10.992388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:10.992406] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:10.992413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.003037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.003055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.003061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.012148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.012167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.012174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.022604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.022622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.031453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.031470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.031477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.042659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.042679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.042686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.052915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.306 [2024-11-20 07:30:11.052933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.052940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.306 [2024-11-20 07:30:11.063748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 
00:29:36.306 [2024-11-20 07:30:11.063766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.306 [2024-11-20 07:30:11.063773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.074831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.074850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.074856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.085849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.085872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.085879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.093677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.093696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.093702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.103566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.103585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.103591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.113589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.113607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.113617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.124521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.124539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.124545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.135922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.135940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.135946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.146390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.146409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.156683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.156701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.156708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.166944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.166962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.166968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.170639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.170657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.170664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.567 [2024-11-20 07:30:11.179119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.567 [2024-11-20 07:30:11.179136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.567 [2024-11-20 07:30:11.179143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.190174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.190191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.190198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.199742] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.199763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.199769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.207892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.207909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.207916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.219443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.219460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.219466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.230711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.230729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.230735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.241983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.242001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.242008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.252017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.252034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.252040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.262876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.262894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.262900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
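Each failed I/O in the stream above is reported as a three-line group: the host-side digest check failing in nvme_tcp.c (nvme_tcp_accel_seq_recv_compute_crc32_done), the affected READ echoed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and its completion printed by nvme_qpair.c: 474 with status COMMAND TRANSIENT TRANSPORT ERROR (00/22) — the status the test harness counts after the run. A rough offline tally of these completions can be pulled from a saved copy of this console output with standard tools (a sketch; the log file name is an assumption):

    # Count TRANSIENT TRANSPORT ERROR completions per submission queue in a saved log
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' console.log | sort | uniq -c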
00:29:36.568 [2024-11-20 07:30:11.274111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.274128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.274134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.284910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.284927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.296008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.296025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.296031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.304293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.304310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.304316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.312721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.312738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.312744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.568 [2024-11-20 07:30:11.320977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.568 [2024-11-20 07:30:11.320994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.568 [2024-11-20 07:30:11.321001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.829 [2024-11-20 07:30:11.332742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0) 00:29:36.829 [2024-11-20 07:30:11.332759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.829 [2024-11-20 07:30:11.332766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:36.829 [2024-11-20 07:30:11.342210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0)
00:29:36.829 [2024-11-20 07:30:11.342229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.829 [2024-11-20 07:30:11.342235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:36.829 [2024-11-20 07:30:11.351290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0)
00:29:36.829 [2024-11-20 07:30:11.351307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.829 [2024-11-20 07:30:11.351314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:36.829 [2024-11-20 07:30:11.360459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23c85a0)
00:29:36.829 [2024-11-20 07:30:11.360477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.829 [2024-11-20 07:30:11.360483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:36.829 3022.50 IOPS, 377.81 MiB/s
00:29:36.829 Latency(us)
00:29:36.829 [2024-11-20T06:30:11.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:36.829 nvme0n1 : 2.00 3023.27 377.91 0.00 0.00 5289.05 1051.31 13544.11
00:29:36.829 [2024-11-20T06:30:11.596Z] ===================================================================================================================
00:29:36.829 [2024-11-20T06:30:11.596Z] Total : 3023.27 377.91 0.00 0.00 5289.05 1051.31 13544.11
00:29:36.829 {
00:29:36.829 "results": [
00:29:36.829 {
00:29:36.829 "job": "nvme0n1",
00:29:36.829 "core_mask": "0x2",
00:29:36.829 "workload": "randread",
00:29:36.829 "status": "finished",
00:29:36.829 "queue_depth": 16,
00:29:36.829 "io_size": 131072,
00:29:36.829 "runtime": 2.004783,
00:29:36.829 "iops": 3023.269850153358,
00:29:36.829 "mibps": 377.9087312691698,
00:29:36.829 "io_failed": 0,
00:29:36.829 "io_timeout": 0,
00:29:36.829 "avg_latency_us": 5289.050950888191,
00:29:36.829 "min_latency_us": 1051.3066666666666,
00:29:36.829 "max_latency_us": 13544.106666666667
00:29:36.829 }
00:29:36.829 ],
00:29:36.830 "core_count": 1
00:29:36.830 }
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:36.830 | .driver_specific
00:29:36.830 | .nvme_error
00:29:36.830 | .status_code
00:29:36.830 | .command_transient_transport_error'
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1472079
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1472079 ']'
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1472079
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:36.830 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1472079
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1472079'
00:29:37.090 killing process with pid 1472079
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1472079
00:29:37.090 Received shutdown signal, test time was about 2.000000 seconds
00:29:37.090
00:29:37.090 Latency(us)
00:29:37.090 [2024-11-20T06:30:11.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:37.090 [2024-11-20T06:30:11.857Z] ===================================================================================================================
00:29:37.090 [2024-11-20T06:30:11.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1472079
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1473300
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1473300 /var/tmp/bperf.sock
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1473300 ']'
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:37.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:37.090 07:30:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:37.090 [2024-11-20 07:30:11.779832] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:29:37.090 [2024-11-20 07:30:11.779895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473300 ]
00:29:37.351 [2024-11-20 07:30:11.869258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:37.351 [2024-11-20 07:30:11.898713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:37.921 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:37.921 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:37.921 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.921 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.182 07:30:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.442 nvme0n1
00:29:38.442 07:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:38.442 07:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:38.442 07:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.442 07:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:38.442 07:30:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:38.442 07:30:13
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.703 Running I/O for 2 seconds... 00:29:38.703 [2024-11-20 07:30:13.254905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.255106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.255132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.267537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.267719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.267736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.280152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.280333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.280349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.292743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.292933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.292950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.305347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.305530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.305546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.317941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.318121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.318138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.330526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.330708] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.330724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.343106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.343285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.343301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.355678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.355871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.355888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.368261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.368440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.368456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.380838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.381022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.381039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.393405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.393584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.393600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.405961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.406139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.406155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.418512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.418690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.418706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.431072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.431249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.431265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.443644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.443838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.703 [2024-11-20 07:30:13.456226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.703 [2024-11-20 07:30:13.456405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.703 [2024-11-20 07:30:13.456421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.964 [2024-11-20 07:30:13.468780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.964 [2024-11-20 07:30:13.468965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.964 [2024-11-20 07:30:13.468981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.964 [2024-11-20 07:30:13.481342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.964 [2024-11-20 07:30:13.481520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.964 [2024-11-20 07:30:13.481536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.964 [2024-11-20 07:30:13.493888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.964 [2024-11-20 07:30:13.494064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.964 [2024-11-20 07:30:13.494079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.964 [2024-11-20 07:30:13.506444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.964 
[2024-11-20 07:30:13.506625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.964 [2024-11-20 07:30:13.506641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.964 [2024-11-20 07:30:13.518970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.964 [2024-11-20 07:30:13.519150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.519166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.531573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.531767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.544099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.544276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.544292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.556680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.556860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.556880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.569202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.569380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.569398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.581783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.581967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.581983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.594315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) 
with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.594494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.594511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.606908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.607087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.607102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.619456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.619634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.619649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.632035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.632215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.632231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.644601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.644777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.644793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.657161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.657339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.657355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.669713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.669901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.669917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.682272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.682455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.682471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.694817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.695006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.695023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.707367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.707550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.707566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:38.965 [2024-11-20 07:30:13.719915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:38.965 [2024-11-20 07:30:13.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.965 [2024-11-20 07:30:13.720109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.732638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.732819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.732835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.745211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.745389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.745405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.757745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.757931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.757947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.770295] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.770475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.770491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.782859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.783043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.783059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.795423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.795600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.795616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.807982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.227 [2024-11-20 07:30:13.808160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.227 [2024-11-20 07:30:13.808175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.227 [2024-11-20 07:30:13.820736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.820925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.820942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.833288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.833468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.833483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.845878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.846057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.846073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
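The Data digest errors in this randwrite pass are reported by the peer's TCP transport (tcp.c:data_crc32_calc_done, as opposed to the initiator-side nvme_tcp.c seen in the randread pass): the crc32c corruption injected through the accel error framework makes digest verification of the incoming WRITE payloads fail, and each affected WRITE is completed back to the initiator with the same transient status (00/22). Condensed from the xtrace above, the setup amounts to the following sequence (paths, address, and NQN exactly as captured in this run; note that rpc_cmd, unlike bperf_rpc, is not pointed at /var/tmp/bperf.sock, so the injection goes to the application on the default RPC socket; replaying this standalone is a sketch that assumes a running target and bdevperf instance):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep per-controller NVMe error counters and retry failed I/O indefinitely
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # bdevperf side: attach over TCP with data digest (DDGST) enabled
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # default RPC socket: inject corruption into crc32c operations (-i 256 as traced above)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    # drive the 2-second randwrite workload configured when bdevperf was launched
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests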
00:29:39.228 [2024-11-20 07:30:13.858445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.858623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.858639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.871014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.871195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.871210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.883552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.883730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.883746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.896116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.896296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.896314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.908663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.908839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.908855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.921244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.921423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.921438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.933790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.933971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.933987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.946346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.946523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.946539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.958890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.959067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.959083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.971418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.971597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.971613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.228 [2024-11-20 07:30:13.983975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.228 [2024-11-20 07:30:13.984152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.228 [2024-11-20 07:30:13.984169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.490 [2024-11-20 07:30:13.996530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:13.996708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:13.996724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.009090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.009274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.009289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.021613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.021790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.021806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.034150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.034328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.034343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.046694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.046873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.046888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.059215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.059394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.059409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.071785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.071969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.071985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.084298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.084476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.084492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.096858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.097041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.097058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:39.491 [2024-11-20 07:30:14.109375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78 00:29:39.491 [2024-11-20 07:30:14.109552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.491 [2024-11-20 07:30:14.109569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:39.491 [2024-11-20 07:30:14.121949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78
00:29:39.491 [2024-11-20 07:30:14.122127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.491 [2024-11-20 07:30:14.122142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:39.491 [2024-11-20 07:30:14.134445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78
00:29:39.491 [2024-11-20 07:30:14.134621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:39.491 [2024-11-20 07:30:14.134637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0
[... the same data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats about every 12.5 ms from 07:30:14.147 through 07:30:15.227 on tqpair=(0x1234500), cid cycling 113-115 and lba varying per I/O; one periodic progress line lands mid-storm: 20158.00 IOPS, 78.74 MiB/s [2024-11-20T06:30:14.258Z] ...]
00:29:40.541 [2024-11-20 07:30:15.239504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234500) with pdu=0x200016efda78
00:29:40.541 [2024-11-20 07:30:15.240319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:40.541 [2024-11-20 07:30:15.240335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:40.541 20259.50 IOPS, 79.14 MiB/s
00:29:40.541 Latency(us)
00:29:40.541 [2024-11-20T06:30:15.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.541 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:40.541 nvme0n1 : 2.01 20260.37 79.14 0.00 0.00 6305.56 5106.35 15291.73
00:29:40.541 [2024-11-20T06:30:15.308Z] ===================================================================================================================
00:29:40.541 [2024-11-20T06:30:15.308Z] Total : 20260.37 79.14 0.00 0.00 6305.56 5106.35 15291.73
00:29:40.541 {
00:29:40.541 "results": [
00:29:40.541 {
00:29:40.541 "job": "nvme0n1",
00:29:40.541 "core_mask": "0x2",
00:29:40.541 "workload": "randwrite",
00:29:40.541 "status": "finished",
00:29:40.541 "queue_depth": 128,
00:29:40.541 "io_size": 4096,
00:29:40.541 "runtime": 2.007367,
00:29:40.541 "iops": 20260.370923702543,
00:29:40.541 "mibps": 79.14207392071306,
00:29:40.541 "io_failed": 0,
00:29:40.541 "io_timeout": 0,
00:29:40.541 "avg_latency_us": 6305.5601921154,
00:29:40.541 "min_latency_us": 5106.346666666666,
00:29:40.541 "max_latency_us": 15291.733333333334
00:29:40.541 }
00:29:40.541 ],
00:29:40.541 "core_count": 1
00:29:40.541 }
00:29:40.541 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:40.541 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:29:40.541 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:40.541 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1473300
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1473300 ']'
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1473300
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1473300
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1473300'
00:29:40.802 killing process with pid 1473300
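The trace above is the pass/fail gate for this run: get_transient_errcount wraps a bdev_get_iostat RPC in a jq filter, and digest.sh@71 asserts the counter is non-zero ((( 159 > 0 ))), i.e. the injected CRC32C corruption surfaced as counted COMMAND TRANSIENT TRANSPORT ERROR completions rather than silent data corruption. Note io_failed stays 0 in the JSON summary, consistent with the bdev layer retrying the failed writes. The throughput figures are also self-consistent: 20260.37 IOPS x 4096 B is about 82.99 MB/s, which is the reported 79.14 MiB/s. A standalone sketch of the same query, reconstructed from the traced commands (the function body is my paraphrase of host/digest.sh, not a verbatim copy):

  # Count completions with TRANSIENT TRANSPORT ERROR status for a bdev via the
  # bdevperf RPC socket; assumes the controller was created after
  # 'bdev_nvme_set_options --nvme-error-stat' so per-status counters exist.
  get_transient_errcount() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 ))   # the test asserts this; here the counter reads 159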
00:29:40.802 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1473300
00:29:40.802 Received shutdown signal, test time was about 2.000000 seconds
00:29:40.802
00:29:40.802 Latency(us)
00:29:40.802 [2024-11-20T06:30:15.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.802 [2024-11-20T06:30:15.569Z] ===================================================================================================================
00:29:40.803 [2024-11-20T06:30:15.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:40.803 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1473300
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1474117
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1474117 /var/tmp/bperf.sock
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1474117 ']'
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:41.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:41.064 07:30:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.064 [2024-11-20 07:30:15.666295] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:29:41.064 [2024-11-20 07:30:15.666353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474117 ]
00:29:41.064 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:41.064 Zero copy mechanism will not be used.
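Before the second pass starts producing output, the digest.sh@57 invocation above is worth decoding. A sketch of the same bdevperf command with the flags glossed (the glosses are mine, from bdevperf's usage text, not something the log itself states):

  # Second error-injection pass: 128 KiB random writes at queue depth 16.
  #   -m 2                    core mask 0x2 (run the reactor on core 1)
  #   -r /var/tmp/bperf.sock  RPC socket the bperf_rpc/bperf_py helpers talk to
  #   -w randwrite            workload type
  #   -o 131072               I/O size in bytes (128 KiB, above the 65536-byte zero-copy threshold noted above)
  #   -t 2                    run time in seconds
  #   -q 16                   queue depth
  #   -z                      start idle and wait for a perform_tests RPC before issuing I/O
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z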
00:29:41.064 [2024-11-20 07:30:15.755893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:41.064 [2024-11-20 07:30:15.785362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:42.007 07:30:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:42.580 nvme0n1
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:42.580 07:30:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:42.580 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:42.580 Zero copy mechanism will not be used.
00:29:42.580 Running I/O for 2 seconds...
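Condensed, the setup traced above is a short RPC sequence; a sketch with the commands taken from the log (annotations are mine; note the accel_error_inject_error calls go through rpc_cmd, which in these scripts targets the default application socket rather than /var/tmp/bperf.sock, and I take the -i 32 argument verbatim without asserting its exact semantics):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 1) enable per-NVMe-status error counters; --bdev-retry-count -1 keeps
  #    retrying failed I/O in the bdev layer (my reading: this is why io_failed
  #    ends up 0 even though every write hits a digest error)
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # 2) make sure crc32c error injection is off while the controller attaches
  $RPC accel_error_inject_error -o crc32c -t disable
  # 3) attach over TCP with data digest enabled (--ddgst); no header digest is requested
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4) start corrupting crc32c results, then kick off the timed run
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests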
00:29:42.580 [2024-11-20 07:30:17.158867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:42.580 [2024-11-20 07:30:17.158943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.580 [2024-11-20 07:30:17.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:42.581 [2024-11-20 07:30:17.163673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:42.581 [2024-11-20 07:30:17.163753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.581 [2024-11-20 07:30:17.163780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same triplet repeats every 4-7 ms from 07:30:17.168 through 07:30:17.316 on tqpair=(0x1234840), all 128 KiB writes (len:32) on qid:1 cid:0, lba varying per I/O and sqhd cycling 0002/0022/0042/0062 ...]
00:29:42.582 [2024-11-20 07:30:17.320784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:42.582 [2024-11-20 07:30:17.320841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.582 [2024-11-20 07:30:17.320868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:42.582 [2024-11-20 07:30:17.325202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:42.582 [2024-11-20 07:30:17.325263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:42.582 [2024-11-20
07:30:17.325282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.582 [2024-11-20 07:30:17.329256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.582 [2024-11-20 07:30:17.329336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.582 [2024-11-20 07:30:17.329355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.582 [2024-11-20 07:30:17.334445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.582 [2024-11-20 07:30:17.334498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.582 [2024-11-20 07:30:17.334516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.582 [2024-11-20 07:30:17.339041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.582 [2024-11-20 07:30:17.339108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.582 [2024-11-20 07:30:17.339129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.582 [2024-11-20 07:30:17.343312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.582 [2024-11-20 07:30:17.343371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.582 [2024-11-20 07:30:17.343395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.347635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.347697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.347716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.352724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.352818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.352835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.358934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.359060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:42.845 [2024-11-20 07:30:17.359077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.364589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.364657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.364676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.368714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.368768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.368787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.372764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.372836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.372857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.376895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.376953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.376974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.380992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.381049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.381067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.384809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.384877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.384894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.388871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.388930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.388950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.392672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.392723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.392741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.396568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.396635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.396654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.400906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.400988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.401005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.406783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.406898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.406914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.411306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.845 [2024-11-20 07:30:17.411385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.845 [2024-11-20 07:30:17.411403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.845 [2024-11-20 07:30:17.415618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.415702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.420189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.420264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.420283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.424374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.424425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.424445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.428610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.428707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.428724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.434954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.435023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.435044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.439164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.439226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.439247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.443438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.443498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.443519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.447688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.447755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.447776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.451831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.451900] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.451920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.456028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.456095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.456113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.460085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.460152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.460175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.464283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.464342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.464362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.468446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.468501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.468521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.472467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.472535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.476500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.476586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.480435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.480514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.480535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.484373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.484450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.484471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.490442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.490498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.490517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.494417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.494468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.494488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.498421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.498486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.498507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.502477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.502534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.502554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.506572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.506634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.506653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.510604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 
07:30:17.510682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.510705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.514672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.514734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.514753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.518661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.518716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.846 [2024-11-20 07:30:17.518737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.846 [2024-11-20 07:30:17.522650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.846 [2024-11-20 07:30:17.522704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.522724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.526750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.526806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.526825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.530654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.530719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.530737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.534607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.534662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.534683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.538640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with 
pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.538695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.538710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.542650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.542703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.542721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.546825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.546907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.546927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.550818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.550880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.550898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.555003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.555060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.555081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.559072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.559153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.559175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.563001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.563056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.563073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.566957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.567021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.567046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.570730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.570793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.570814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.574897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.574990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.575006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.578973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.579035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.582870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.582926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.582946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.587446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.587525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.587541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.592148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.592226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.592243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.596527] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.596583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.596604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.600588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.600646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.600666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.847 [2024-11-20 07:30:17.604648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:42.847 [2024-11-20 07:30:17.604710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.847 [2024-11-20 07:30:17.604731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.608704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.608762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.608783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.612799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.612856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.612884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.616761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.616846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.616869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.620778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.620857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.620882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 
[2024-11-20 07:30:17.624627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.624706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.624724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.628633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.628701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.628721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.632389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.632446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.632466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.636375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.636429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.636450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.640354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.640419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.640438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.644726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.644810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.644832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.649842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.649933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.649955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.656430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.656503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.656524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.660745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.660799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.660819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.664813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.664872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.664892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.669207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.669315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.669331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.674541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.674610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.678630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.678698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.678722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.682583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.682665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.686890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.686952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.686973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.691175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.691240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.691260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.695310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.110 [2024-11-20 07:30:17.695392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.110 [2024-11-20 07:30:17.695414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.110 [2024-11-20 07:30:17.699445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.111 [2024-11-20 07:30:17.699511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.111 [2024-11-20 07:30:17.699531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.111 [2024-11-20 07:30:17.704074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.111 [2024-11-20 07:30:17.704175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.111 [2024-11-20 07:30:17.704190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.111 [2024-11-20 07:30:17.709340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.111 [2024-11-20 07:30:17.709392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.111 [2024-11-20 07:30:17.709411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.111 [2024-11-20 07:30:17.713353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.111 [2024-11-20 07:30:17.713419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.111 [2024-11-20 07:30:17.713439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.717336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.717404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.717426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.721403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.721467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.721485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.725428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.725494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.725514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.729535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.729594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.729612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.733530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.733583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.733604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.737541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.737606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.737626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.741551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.741612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.741632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.745727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.745788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.745804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.749875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.749934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.749956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.754035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.754114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.754136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.757964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.758028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.758049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.762062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.762122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.762143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.765844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.765903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.765922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.769704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.769759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.769778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.773692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.773744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.773762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.777668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.777751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.781644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.781698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.781716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.785496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.785572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.789401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.789466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.789485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.793384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.793441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.793461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.797261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.797321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.797343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.801296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.801357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.111 [2024-11-20 07:30:17.801376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.111 [2024-11-20 07:30:17.805162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.111 [2024-11-20 07:30:17.805223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.805241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.809682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.809766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.809781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.814648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.814778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.814798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.819121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.819185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.819205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.823273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.823336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.823354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.827369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.827424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.831481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.831539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.831559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.835460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.835532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.839528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.839612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.839632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.843326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.843381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.843401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.847388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.847437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.847456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.851291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.851354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.851374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.855331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.855396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.855415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.859515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.859569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.859588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.864212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.864266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.864283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.112 [2024-11-20 07:30:17.868721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.112 [2024-11-20 07:30:17.868937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.112 [2024-11-20 07:30:17.868954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.873736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.873807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.873825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.878052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.878123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.878143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.882315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.882387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.882405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.886626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.886689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.886708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.890881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.890955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.890974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.894912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.894970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.894992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.898757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.898835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.898855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.902909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.902967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.902989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.906947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.907005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.907025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.910994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.911044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.911060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.916543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.916615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.916634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.921425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.921513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.926486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.926546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.930695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.930769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.930789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.935042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.935103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.935122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.939250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.939357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.939373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.944856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.944952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.944971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.948915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.948975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.948999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.952783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.952849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.952877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.375 [2024-11-20 07:30:17.956634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.375 [2024-11-20 07:30:17.956710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.375 [2024-11-20 07:30:17.956732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.960593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.960675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.960696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.964563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.964626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.964648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.968553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.968610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.968632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.972640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.972710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.972731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.976442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.976497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.976513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.980629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.980683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.980704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.984414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.984474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.984494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.988300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.988356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.988372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.992399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.992466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.992487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:17.996429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:17.996505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:17.996527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.000254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.000314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.000333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.004445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.004511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.004539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.008613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.008668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.008689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.012752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.012829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.012852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.016533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.016590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.016609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.020680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.020765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.020786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.024705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.024785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.024807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.028579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.028628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.028647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.032575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.032634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.032656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.036595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.036655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.036676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.040473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.040539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.040556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.044599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.044660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.048690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.048745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.048763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.052680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.052735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.052756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.056711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.056768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.056787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.060740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.060794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.376 [2024-11-20 07:30:18.060813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.376 [2024-11-20 07:30:18.064876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.376 [2024-11-20 07:30:18.064947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.068870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.068920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.068938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.072990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.073072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.073090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.076971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.077029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.077048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.081090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.081141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.081159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.084957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.085014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.085032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.088807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.088857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.088883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.092653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.092706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.092725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.096414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.096471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.096491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.100366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.100429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.100449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.104340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.104420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.108435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.108495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.108521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.112465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.112543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.112562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.116668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.116750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.116771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.120563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.120615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.120631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.124374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.124441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.124461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.128455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.128521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.128543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.132396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.132447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.132466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.377 [2024-11-20 07:30:18.136405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.377 [2024-11-20 07:30:18.136481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.377 [2024-11-20 07:30:18.136501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.640 [2024-11-20 07:30:18.140149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.640 [2024-11-20 07:30:18.140200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.640 [2024-11-20 07:30:18.140218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.640 [2024-11-20 07:30:18.143917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.640 [2024-11-20 07:30:18.143981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.640 [2024-11-20 07:30:18.144001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.640 [2024-11-20 07:30:18.147824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.640 [2024-11-20 07:30:18.147912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.640 [2024-11-20 07:30:18.147932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.640 [2024-11-20 07:30:18.151922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.640 [2024-11-20 07:30:18.151979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.640 [2024-11-20 07:30:18.151996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.156895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.156953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.156973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 7212.00 IOPS, 901.50 MiB/s [2024-11-20T06:30:18.408Z] [2024-11-20 07:30:18.162905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.162967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.162987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.166998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.167071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.167090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.171212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.171281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.171301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.175326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.175381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.175401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.179550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.179611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.179631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.183611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.183669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.183690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.187524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.187601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.191583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.191666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.191687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.195823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.195912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.195935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.199927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.200010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.200030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.204018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.204098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.204118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.207993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.208049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.208068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.212688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.212742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.217431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.217497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.217519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.221501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.221583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.225594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.225673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.225694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.229693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.229755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.229774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.233799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.233852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.233878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.237964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.238041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.238059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.241921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.241983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.242003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.245848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.245906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.245924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.249643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.249702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.249722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.253663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.253749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.253766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.257470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.257535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.257553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.261547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.641 [2024-11-20 07:30:18.261610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.641 [2024-11-20 07:30:18.261628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.641 [2024-11-20 07:30:18.266066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.266123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.642 [2024-11-20 07:30:18.266142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.642 [2024-11-20 07:30:18.270023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.270096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.642 [2024-11-20 07:30:18.270116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.642 [2024-11-20 07:30:18.274387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.274436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.642 [2024-11-20 07:30:18.274456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:43.642 [2024-11-20 07:30:18.279247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.279334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.642 [2024-11-20 07:30:18.279354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:43.642 [2024-11-20 07:30:18.283923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.283974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.642 [2024-11-20 07:30:18.283992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:43.642 [2024-11-20 07:30:18.288259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:43.642 [2024-11-20 07:30:18.288316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0
nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.288335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.292403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.292458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.292479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.296564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.296629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.296649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.300666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.300737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.300757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.304765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.304822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.304846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.308757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.308818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.308836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.312677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.312727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.312744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.316446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.316501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.316521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.320427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.320478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.320494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.324458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.324540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.328233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.328285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.328305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.332172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.332229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.332249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.336177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.336243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.336264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.340065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.340115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.340132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.343829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.343884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.343902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.347606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.347665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.347684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.351698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.351760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.351780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.355817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.355924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.359910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.359968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.359987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.363840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.363899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.363918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.642 [2024-11-20 07:30:18.367855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.642 [2024-11-20 07:30:18.367923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.642 [2024-11-20 07:30:18.367942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.372017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 
07:30:18.372069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.375916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.375967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.375984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.379794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.379872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.379892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.383566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.383619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.383638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.387500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.387560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.387576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.391728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.391803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.391826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.395816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.395884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.395906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.643 [2024-11-20 07:30:18.400256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with 
pdu=0x200016eff3c8 00:29:43.643 [2024-11-20 07:30:18.400344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.643 [2024-11-20 07:30:18.400366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.405308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.405392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.405413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.409892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.409967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.409986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.415103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.415189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.415205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.421742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.421916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.421932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.430982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.431085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.431101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.438139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.438244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.438260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.445455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.445573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.445592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.451714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.451764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.451780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.456739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.456821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.456843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.461454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.461535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.461557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.465961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.466035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.466053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.470117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.470182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.470201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.474739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.474798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.474819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.906 [2024-11-20 07:30:18.479200] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.906 [2024-11-20 07:30:18.479262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.906 [2024-11-20 07:30:18.479281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.483607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.483678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.483698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.488101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.488160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.488182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.492443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.492502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.492522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.496919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.496984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.497003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.501350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.501414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.501434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.505820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.505896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.505917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.510400] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.510494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.510511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.517498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.517624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.524527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.524618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.524634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.532164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.532326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.532342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.539274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.539366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.539382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.546037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.546127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.546143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.551226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.551336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.551352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 
[2024-11-20 07:30:18.556341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.556450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.556466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.562522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.562623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.562639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.568481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.568590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.568605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.572721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.572786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.572807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.576785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.576860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.580855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.580921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.580944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.584922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.584990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.585009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.589004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.589084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.593132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.593196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.593218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.597060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.597111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.597130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.601261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.601321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.601341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.907 [2024-11-20 07:30:18.605180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.907 [2024-11-20 07:30:18.605234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.907 [2024-11-20 07:30:18.605251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.609295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.609359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.609381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.613414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.613474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.613496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.617434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.617496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.617517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.621357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.621420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.621438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.625311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.625361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.625381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.629068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.629122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.629142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.633174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.633229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.633248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.637243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.637303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.637324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.642910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.643023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.643039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.648403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.648468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.648488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.652899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.652952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.652968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.657524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.657577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.657596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.661550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.661611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.661630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.908 [2024-11-20 07:30:18.665706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:43.908 [2024-11-20 07:30:18.665762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.908 [2024-11-20 07:30:18.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.669845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.669911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.669932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.673886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.673947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.673966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.677911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.677983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.678004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.681721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.681772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.681790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.685687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.685752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.685772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.689844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.689901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.689924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.694071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.694142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.694160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.698339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.698392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.698410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.703554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.703604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 
07:30:18.703622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.708264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.708374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.708391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.713758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.713838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.713856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.720615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.720715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.720732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.725348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.725399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.725415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.730207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.730262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.730284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.734978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.735038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.735058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.739249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.739328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:44.172 [2024-11-20 07:30:18.739351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.743727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.743790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.743808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.748409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.748472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.748494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.753178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.753277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.753293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.759902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.759956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.764879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.764934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.764955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.769674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.769734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.769754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.773778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.773902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.773918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.778216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.778410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.778426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.782111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.172 [2024-11-20 07:30:18.782291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.172 [2024-11-20 07:30:18.782306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.172 [2024-11-20 07:30:18.786067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.786245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.786261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.789974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.790147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.790163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.793677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.793852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.793877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.797263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.797442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.797459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.801189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.801368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.801384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.805381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.805559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.805575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.809380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.809555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.809575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.813078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.813252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.813268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.817105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.817285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.817305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.820855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.821029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.821046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.824510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.824688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.828113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.828289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.828308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.832665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.832844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.832860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.837656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.837785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.837801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.842150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.842322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.842339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.846006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.846183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.846202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.849812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.849998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.850015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.853807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.853988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.854004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.857820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.858000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.858016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.861924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.862095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.862111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.865732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.865913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.865930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.869456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.869625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.869641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.873281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.873459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.873475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.877084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.173 [2024-11-20 07:30:18.877253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.173 [2024-11-20 07:30:18.877269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.173 [2024-11-20 07:30:18.880926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.881101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.881122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.884495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 
07:30:18.884676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.884696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.888056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.888233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.888254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.891714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.891890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.891907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.895563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.895742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.895762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.899145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.899327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.899350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.902704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.902913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.902934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.906235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.906406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.906425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.909773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with 
pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.909954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.909977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.913310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.913490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.913507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.917081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.917261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.920648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.920844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.920874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.924184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.924358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.924377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.927706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.927893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.927913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.174 [2024-11-20 07:30:18.931248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.174 [2024-11-20 07:30:18.931449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.174 [2024-11-20 07:30:18.931468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.934768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.934943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.934966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.938593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.938765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.938781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.944117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.944297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.947938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.948114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.948130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.951755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.951941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.951961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.955324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.955509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.955530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.958997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.959173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.959193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.962567] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.962745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.962766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.966513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.966687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.966703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.972518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.972814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.972831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.980018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.980211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.980227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.987574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.987810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.987826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:18.996039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:18.996219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:18.996236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:19.003873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:19.004084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:19.004101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.436 
[2024-11-20 07:30:19.011797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:19.011971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:19.011988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:19.019143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:19.019433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:19.019451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.436 [2024-11-20 07:30:19.026810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.436 [2024-11-20 07:30:19.027063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.436 [2024-11-20 07:30:19.027080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.034279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.034535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.034551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.041952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.042070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.042086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.049359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.049597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.049617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.056986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.057152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.057169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.064213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.064378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.064395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.069093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.069267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.069283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.073048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.073223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.073239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.077011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.077185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.077202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.080921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.081101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.081117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.084922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.085099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.085115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.088809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.088992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.089014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.092409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.092583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.092605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.095993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.096170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.096186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.099944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.100122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.100138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.103696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.103887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.103907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.107274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.107454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.107474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.110894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.111066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.111084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.114824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.115003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.115019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.118524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.118704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.118725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.122122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.122293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.122309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.125965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.126144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.126164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.129539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.129719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.129739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.133222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.133400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.133419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.136941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.137118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.137138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.437 [2024-11-20 07:30:19.141347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8 00:29:44.437 [2024-11-20 07:30:19.141520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.437 [2024-11-20 07:30:19.141536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.437 [2024-11-20 07:30:19.145806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:44.437 [2024-11-20 07:30:19.145987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.437 [2024-11-20 07:30:19.146004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.437 [2024-11-20 07:30:19.151366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:44.437 [2024-11-20 07:30:19.151538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.437 [2024-11-20 07:30:19.151554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.437 [2024-11-20 07:30:19.155217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:44.438 [2024-11-20 07:30:19.155391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.438 [2024-11-20 07:30:19.155407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:44.438 [2024-11-20 07:30:19.159083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:44.438 [2024-11-20 07:30:19.159256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.438 [2024-11-20 07:30:19.159276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.438 7050.50 IOPS, 881.31 MiB/s [2024-11-20T06:30:19.205Z]
[2024-11-20 07:30:19.164088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1234840) with pdu=0x200016eff3c8
00:29:44.438 [2024-11-20 07:30:19.164161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:44.438 [2024-11-20 07:30:19.164181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.438
00:29:44.438 Latency(us)
00:29:44.438 [2024-11-20T06:30:19.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.438 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:44.438 nvme0n1 : 2.00 7047.79 880.97 0.00 0.00 2266.33 1331.20 8192.00
00:29:44.438 [2024-11-20T06:30:19.205Z] ===================================================================================================================
00:29:44.438 [2024-11-20T06:30:19.205Z] Total : 7047.79 880.97 0.00 0.00 2266.33 1331.20 8192.00
00:29:44.438 {
00:29:44.438 "results": [
00:29:44.438 {
00:29:44.438 "job": "nvme0n1",
00:29:44.438 "core_mask": "0x2",
00:29:44.438 "workload": "randwrite",
00:29:44.438 "status": "finished",
00:29:44.438 "queue_depth": 16,
00:29:44.438 "io_size": 131072,
00:29:44.438 "runtime": 2.003039,
00:29:44.438 "iops": 7047.790881755173,
00:29:44.438 "mibps": 880.9738602193967,
00:29:44.438 "io_failed": 0,
00:29:44.438 "io_timeout": 0,
00:29:44.438 "avg_latency_us": 2266.3261523930955,
00:29:44.438 "min_latency_us": 1331.2,
00:29:44.438 "max_latency_us": 8192.0
00:29:44.438 }
00:29:44.438 ],
00:29:44.438 "core_count": 1
00:29:44.438 }
00:29:44.438 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:44.438 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:44.438 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:44.438 | .driver_specific
00:29:44.438 | .nvme_error
00:29:44.438 | .status_code
00:29:44.438 | .command_transient_transport_error'
00:29:44.438 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 456 > 0 ))
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1474117
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1474117 ']'
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1474117
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1474117
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1474117'
killing process with pid 1474117
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1474117
Received shutdown signal, test time was about 2.000000 seconds
00:29:44.697
00:29:44.697 Latency(us)
00:29:44.697 [2024-11-20T06:30:19.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.697 [2024-11-20T06:30:19.464Z] ===================================================================================================================
00:29:44.697 [2024-11-20T06:30:19.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.697 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1474117
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1471232 ']'
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1471232'
killing process with pid 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1471232
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1471232
00:29:44.958
00:29:44.958 real 0m16.646s
00:29:44.958 user 0m32.978s
00:29:44.958 sys 0m3.544s
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:44.958 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:44.958 ************************************
00:29:44.958 END TEST nvmf_digest_error
00:29:44.958 ************************************
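Both shutdowns above, bperf (pid 1474117) and then the long-running nvmf app (pid 1471232), go through the same killprocess helper in autotest_common.sh: validate the pid argument, probe liveness with kill -0, resolve the command name with ps so sudo itself is never killed, then kill and reap. A simplified sketch of that flow, reconstructed from the trace (the real helper carries additional platform branches and error handling):

    # simplified sketch of the killprocess pattern traced above
    killprocess() {
        [ -z "$1" ] && return 1                   # '[' -z <pid> ']'
        if ! kill -0 "$1" 2> /dev/null; then      # probe: is the process alive?
            echo "Process with pid $1 is not found"
            return 0
        fi
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$1")
        fi
        [ "$process_name" = sudo ] && return 1    # refuse to kill sudo
        echo "killing process with pid $1"
        kill "$1"
        wait "$1"                                 # reap; works because the pid is our child
    }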
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:45.219 rmmod nvme_tcp
00:29:45.219 rmmod nvme_fabrics
00:29:45.219 rmmod nvme_keyring
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1471232 ']'
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1471232
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1471232 ']'
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1471232
00:29:45.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1471232) - No such process
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1471232 is not found'
Process with pid 1471232 is not found
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:45.219 07:30:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:47.763
00:29:47.763 real 0m43.614s
00:29:47.763 user 1m7.704s
00:29:47.763 sys 0m13.284s
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:47.763 ************************************
00:29:47.763 END TEST nvmf_digest
00:29:47.763 ************************************
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.763 ************************************
00:29:47.763 START TEST nvmf_bdevperf
00:29:47.763 ************************************
00:29:47.763 07:30:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:47.763 * Looking for test storage...
00:29:47.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
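The trace above is the lcov version gate for the coverage options: lt 1.15 2 hands off to cmp_versions in scripts/common.sh, which splits both version strings on '.', '-' and ':' (the IFS=.-: records), normalizes each component through decimal, and compares them left to right; here 1.15 < 2 is decided on the first component, and the result feeds the lcov_rc_opt choice a few lines below. A compact re-creation of that comparison (ver_lt is a hypothetical name used only for this sketch; the real cmp_versions supports more operators):

    # component-wise "less than" on dotted versions, in the spirit of the traced run
    ver_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal is not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"   # matches the traced result: lcov 1.15 is older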
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:47.763 --rc genhtml_branch_coverage=1
00:29:47.763 --rc genhtml_function_coverage=1
00:29:47.763 --rc genhtml_legend=1
00:29:47.763 --rc geninfo_all_blocks=1
00:29:47.763 --rc geninfo_unexecuted_blocks=1
00:29:47.763
00:29:47.763 '
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:47.763 --rc genhtml_branch_coverage=1
00:29:47.763 --rc genhtml_function_coverage=1
00:29:47.763 --rc genhtml_legend=1
00:29:47.763 --rc geninfo_all_blocks=1
00:29:47.763 --rc geninfo_unexecuted_blocks=1
00:29:47.763
00:29:47.763 '
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:47.763 --rc genhtml_branch_coverage=1
00:29:47.763 --rc genhtml_function_coverage=1
00:29:47.763 --rc genhtml_legend=1
00:29:47.763 --rc geninfo_all_blocks=1
00:29:47.763 --rc geninfo_unexecuted_blocks=1
00:29:47.763
00:29:47.763 '
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:47.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:47.763 --rc genhtml_branch_coverage=1
00:29:47.763 --rc genhtml_function_coverage=1
00:29:47.763 --rc genhtml_legend=1
00:29:47.763 --rc geninfo_all_blocks=1
00:29:47.763 --rc geninfo_unexecuted_blocks=1
00:29:47.763
00:29:47.763 '
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:47.763 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:47.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.764 07:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:55.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:55.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
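The trace above is nvmf/common.sh building its table of supported NICs: the e810 array collects the Intel device IDs 0x1592/0x159b, x722 collects 0x37d2, and the mlx array the Mellanox IDs, after which pci_devs is narrowed to the e810 entries ([[ e810 == e810 ]] in the trace). Both ports of the dual-port adapter, 0000:31:00.0 and 0000:31:00.1 (0x8086 - 0x159b, bound to ice), match, and the script then looks behind each port in sysfs for its net device. A condensed sketch of that walk, with lspci standing in for the script's pci_bus_cache arrays (an assumption; the harness pre-caches the bus instead):

    # condensed sketch of the e810 discovery traced above; lspci replaces
    # the pci_bus_cache lookups, everything else mirrors the trace
    e810=($(lspci -Dnd 8086:159b | awk '{print $1}'))    # 0000:31:00.0 0000:31:00.1
    for pci in "${e810[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs behind the port
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done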
00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:55.980 Found net devices under 0000:31:00.0: cvl_0_0 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.980 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:55.981 Found net devices under 0000:31:00.1: cvl_0_1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.981 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:29:56.312 00:29:56.312 --- 10.0.0.2 ping statistics --- 00:29:56.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.312 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:29:56.312 00:29:56.312 --- 10.0.0.1 ping statistics --- 00:29:56.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.312 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1479522 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1479522 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1479522 ']' 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:56.312 07:30:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:56.312 [2024-11-20 07:30:30.916629] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:29:56.312 [2024-11-20 07:30:30.916698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.312 [2024-11-20 07:30:31.027927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:56.623 [2024-11-20 07:30:31.079048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.623 [2024-11-20 07:30:31.079105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.623 [2024-11-20 07:30:31.079114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.623 [2024-11-20 07:30:31.079121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.623 [2024-11-20 07:30:31.079127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.623 [2024-11-20 07:30:31.080988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.623 [2024-11-20 07:30:31.081280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.623 [2024-11-20 07:30:31.081281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 [2024-11-20 07:30:31.785609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 Malloc0 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
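The nvmf_tcp_init sequence traced above wires the two E810 ports back-to-back through a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP/4420 on the initiator interface, and the two pings confirm the path in both directions before nvmf_tgt is started inside the namespace. The essential commands, as they appear in the trace (address flushes and the iptables comment tag omitted):

    # namespace topology from nvmf_tcp_init: target NIC in its own netns,
    # initiator NIC in the root namespace, one /24 between them
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator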
00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.210 [2024-11-20 07:30:31.854938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:57.210 { 00:29:57.210 "params": { 00:29:57.210 "name": "Nvme$subsystem", 00:29:57.210 "trtype": "$TEST_TRANSPORT", 00:29:57.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:57.210 "adrfam": "ipv4", 00:29:57.210 "trsvcid": "$NVMF_PORT", 00:29:57.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:57.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:57.210 "hdgst": ${hdgst:-false}, 00:29:57.210 "ddgst": ${ddgst:-false} 00:29:57.210 }, 00:29:57.210 "method": "bdev_nvme_attach_controller" 00:29:57.210 } 00:29:57.210 EOF 00:29:57.210 )") 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:57.210 07:30:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:57.210 "params": { 00:29:57.210 "name": "Nvme1", 00:29:57.210 "trtype": "tcp", 00:29:57.210 "traddr": "10.0.0.2", 00:29:57.210 "adrfam": "ipv4", 00:29:57.210 "trsvcid": "4420", 00:29:57.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:57.210 "hdgst": false, 00:29:57.210 "ddgst": false 00:29:57.210 }, 00:29:57.210 "method": "bdev_nvme_attach_controller" 00:29:57.210 }' 00:29:57.210 [2024-11-20 07:30:31.912554] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:29:57.210 [2024-11-20 07:30:31.912613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479824 ] 00:29:57.470 [2024-11-20 07:30:31.990897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.471 [2024-11-20 07:30:32.027221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.471 Running I/O for 1 seconds... 00:29:58.855 8963.00 IOPS, 35.01 MiB/s 00:29:58.855 Latency(us) 00:29:58.855 [2024-11-20T06:30:33.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:58.856 Verification LBA range: start 0x0 length 0x4000 00:29:58.856 Nvme1n1 : 1.01 9014.16 35.21 0.00 0.00 14140.16 1761.28 16602.45 00:29:58.856 [2024-11-20T06:30:33.623Z] =================================================================================================================== 00:29:58.856 [2024-11-20T06:30:33.623Z] Total : 9014.16 35.21 0.00 0.00 14140.16 1761.28 16602.45 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1480030 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:58.856 { 00:29:58.856 "params": { 00:29:58.856 "name": "Nvme$subsystem", 00:29:58.856 "trtype": "$TEST_TRANSPORT", 00:29:58.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.856 "adrfam": "ipv4", 00:29:58.856 "trsvcid": "$NVMF_PORT", 00:29:58.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.856 "hdgst": ${hdgst:-false}, 00:29:58.856 "ddgst": ${ddgst:-false} 00:29:58.856 }, 00:29:58.856 "method": "bdev_nvme_attach_controller" 00:29:58.856 } 00:29:58.856 EOF 00:29:58.856 )") 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
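tgt_init above is five RPCs against the freshly started nvmf_tgt: create the TCP transport, a 64 MiB / 512 B-block malloc bdev, subsystem cnode1, then attach the namespace and the 10.0.0.2:4420 listener. bdevperf itself never touches the RPC socket; gen_nvmf_target_json emits the bdev_nvme_attach_controller config on a file descriptor and --json /dev/fd/62 (or /dev/fd/63 for the second run) points at it. The first run's table is self-consistent: 9014.16 IOPS x 4096 B per I/O is about 35.21 MiB/s, matching the MiB/s column. A condensed sketch of the sequence, where rpc.py stands in for the harness's rpc_cmd wrapper and $rootdir for the workspace path (both assumptions):

    # target side: the tgt_init RPC sequence traced above
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: the config arrives on whatever fd the process substitution
    # gets (62 and 63 in the trace), not via a temp file
    "$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 1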
00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:58.856 07:30:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:58.856 "params": { 00:29:58.856 "name": "Nvme1", 00:29:58.856 "trtype": "tcp", 00:29:58.856 "traddr": "10.0.0.2", 00:29:58.856 "adrfam": "ipv4", 00:29:58.856 "trsvcid": "4420", 00:29:58.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.856 "hdgst": false, 00:29:58.856 "ddgst": false 00:29:58.856 }, 00:29:58.856 "method": "bdev_nvme_attach_controller" 00:29:58.856 }' 00:29:58.856 [2024-11-20 07:30:33.396003] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:29:58.856 [2024-11-20 07:30:33.396057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480030 ] 00:29:58.856 [2024-11-20 07:30:33.474730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.856 [2024-11-20 07:30:33.509897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.117 Running I/O for 15 seconds... 00:30:01.001 11111.00 IOPS, 43.40 MiB/s [2024-11-20T06:30:36.712Z] 11041.50 IOPS, 43.13 MiB/s [2024-11-20T06:30:36.712Z] 07:30:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1479522 00:30:01.945 07:30:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:01.945 [2024-11-20 07:30:36.362385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 
07:30:36.362547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.945 [2024-11-20 07:30:36.362989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.945 [2024-11-20 07:30:36.362996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.946 [2024-11-20 07:30:36.363263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.946 [2024-11-20 07:30:36.363270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
[07:30:36.363280 .. 07:30:36.364661: 81 repeated NOTICE pairs elided. For each READ sqid:1 nsid:1 len:8 at lba 103096 through 103736 (step 8, varying cid), nvme_qpair.c: 243:nvme_io_qpair_print_command printed the command ("SGL TRANSPORT DATA BLOCK TRANSPORT 0x0") and nvme_qpair.c: 474:spdk_nvme_print_completion printed "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0".]
00:30:01.948 [2024-11-20 07:30:36.364670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e950 is same with the state(6) to be set
00:30:01.948 [2024-11-20 07:30:36.364680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:01.948 [2024-11-20 07:30:36.364686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:01.948 [2024-11-20 07:30:36.364693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103744 len:8 PRP1 0x0 PRP2 0x0
00:30:01.948 [2024-11-20 07:30:36.364701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:01.948 [2024-11-20 07:30:36.368295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.948 [2024-11-20 07:30:36.368347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:01.948 [2024-11-20 07:30:36.369255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.948 [2024-11-20 07:30:36.369293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:01.948 [2024-11-20 07:30:36.369305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:01.948 [2024-11-20 07:30:36.369546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:01.948 [2024-11-20 07:30:36.369771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.948 [2024-11-20 07:30:36.369780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.948 [2024-11-20 07:30:36.369789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.948 [2024-11-20 07:30:36.369799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.948 [2024-11-20 07:30:36.382387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.948 [2024-11-20 07:30:36.382911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.948 [2024-11-20 07:30:36.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.948 [2024-11-20 07:30:36.382946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.948 [2024-11-20 07:30:36.383172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.948 [2024-11-20 07:30:36.383393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.948 [2024-11-20 07:30:36.383401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.948 [2024-11-20 07:30:36.383409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.948 [2024-11-20 07:30:36.383416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.948 [2024-11-20 07:30:36.396228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.948 [2024-11-20 07:30:36.396923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.948 [2024-11-20 07:30:36.396960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.948 [2024-11-20 07:30:36.396981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.948 [2024-11-20 07:30:36.397223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.948 [2024-11-20 07:30:36.397447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.948 [2024-11-20 07:30:36.397456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.948 [2024-11-20 07:30:36.397464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.948 [2024-11-20 07:30:36.397472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.948 [2024-11-20 07:30:36.410096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.948 [2024-11-20 07:30:36.410771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.948 [2024-11-20 07:30:36.410809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.948 [2024-11-20 07:30:36.410820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.948 [2024-11-20 07:30:36.411069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.948 [2024-11-20 07:30:36.411294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.948 [2024-11-20 07:30:36.411304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.948 [2024-11-20 07:30:36.411312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.948 [2024-11-20 07:30:36.411320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.949 [2024-11-20 07:30:36.423917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.424603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.424641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.424651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.424898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.425123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.425132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.425140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.425147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.949 [2024-11-20 07:30:36.437738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.438315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.438335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.438343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.438562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.438787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.438795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.438802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.438810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.949 [2024-11-20 07:30:36.451621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.452302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.452313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.452552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.452776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.452784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.452793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.452801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.949 [2024-11-20 07:30:36.465611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.466258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.466296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.466307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.466546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.466769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.466778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.466786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.466794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.949 [2024-11-20 07:30:36.479604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.480286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.480323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.480335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.480574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.480797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.480806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.480813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.480825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.949 [2024-11-20 07:30:36.493422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.494080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.494118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.494129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.494367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.494591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.494600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.494608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.494616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.949 [2024-11-20 07:30:36.507420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.507985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.508005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.508013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.508233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.508452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.508461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.508468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.508475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.949 [2024-11-20 07:30:36.521255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.521922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.949 [2024-11-20 07:30:36.521959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.949 [2024-11-20 07:30:36.521970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.949 [2024-11-20 07:30:36.522209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.949 [2024-11-20 07:30:36.522432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.949 [2024-11-20 07:30:36.522441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.949 [2024-11-20 07:30:36.522450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.949 [2024-11-20 07:30:36.522458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.949 [2024-11-20 07:30:36.535072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.949 [2024-11-20 07:30:36.535710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.535747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.535758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.536006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.536231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.536240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.536248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.536256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.950 [2024-11-20 07:30:36.549068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.549652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.549672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.549680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.549908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.550128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.550136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.550143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.550150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.950 [2024-11-20 07:30:36.562952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.563477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.563494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.563501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.563720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.563945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.563953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.563960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.563967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.950 [2024-11-20 07:30:36.576769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.577395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.577432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.577448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.577687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.577921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.577932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.577940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.577948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.950 [2024-11-20 07:30:36.590753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.591307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.591328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.591335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.591555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.591774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.591782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.591789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.591795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.950 [2024-11-20 07:30:36.604606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.605232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.605270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.605281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.605520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.605743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.605752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.605760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.605768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.950 [2024-11-20 07:30:36.618591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.619253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.619291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.619303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.619546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.619770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.619783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.619791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.619799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.950 [2024-11-20 07:30:36.632407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.633094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.633132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.633143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.633382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.633606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.633617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.633624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.633633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.950 [2024-11-20 07:30:36.646264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.646776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.646795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.646803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.647030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.647251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.647261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.647268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.647275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.950 [2024-11-20 07:30:36.660097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.950 [2024-11-20 07:30:36.660763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.950 [2024-11-20 07:30:36.660801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.950 [2024-11-20 07:30:36.660812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.950 [2024-11-20 07:30:36.661059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.950 [2024-11-20 07:30:36.661284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.950 [2024-11-20 07:30:36.661292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.950 [2024-11-20 07:30:36.661300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.950 [2024-11-20 07:30:36.661312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.950 [2024-11-20 07:30:36.673919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.951 [2024-11-20 07:30:36.674573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.951 [2024-11-20 07:30:36.674611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.951 [2024-11-20 07:30:36.674622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.951 [2024-11-20 07:30:36.674868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.951 [2024-11-20 07:30:36.675093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.951 [2024-11-20 07:30:36.675102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.951 [2024-11-20 07:30:36.675109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.951 [2024-11-20 07:30:36.675117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.951 [2024-11-20 07:30:36.687925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.951 [2024-11-20 07:30:36.688515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.951 [2024-11-20 07:30:36.688534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.951 [2024-11-20 07:30:36.688542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.951 [2024-11-20 07:30:36.688762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.951 [2024-11-20 07:30:36.688989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.951 [2024-11-20 07:30:36.688998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.951 [2024-11-20 07:30:36.689005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.951 [2024-11-20 07:30:36.689012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:01.951 [2024-11-20 07:30:36.701965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.951 [2024-11-20 07:30:36.702630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.951 [2024-11-20 07:30:36.702668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:01.951 [2024-11-20 07:30:36.702679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:01.951 [2024-11-20 07:30:36.702927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:01.951 [2024-11-20 07:30:36.703151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.951 [2024-11-20 07:30:36.703161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.951 [2024-11-20 07:30:36.703168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.951 [2024-11-20 07:30:36.703176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.212 9741.67 IOPS, 38.05 MiB/s [2024-11-20T06:30:36.979Z] [2024-11-20 07:30:36.717455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.213 [2024-11-20 07:30:36.718077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.213 [2024-11-20 07:30:36.718115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:02.213 [2024-11-20 07:30:36.718126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:02.213 [2024-11-20 07:30:36.718365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:02.213 [2024-11-20 07:30:36.718588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.213 [2024-11-20 07:30:36.718597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.213 [2024-11-20 07:30:36.718605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.213 [2024-11-20 07:30:36.718613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:02.213 [2024-11-20 07:30:36.731441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.213 [2024-11-20 07:30:36.732165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.213 [2024-11-20 07:30:36.732187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:02.213 [2024-11-20 07:30:36.732195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:02.213 [2024-11-20 07:30:36.732420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:02.213 [2024-11-20 07:30:36.732640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.213 [2024-11-20 07:30:36.732648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.213 [2024-11-20 07:30:36.732655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.213 [2024-11-20 07:30:36.732662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.213 [2024-11-20 07:30:36.745265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.745715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.745733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.745742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.745967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.746188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.746196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.746204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.746210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.759225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.759753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.759769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.759781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.760006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.760226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.760234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.760241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.760248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.773040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.773609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.773625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.773632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.773851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.774077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.774086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.774093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.774099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.786910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.787528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.787566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.787577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.787816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.788048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.788058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.788066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.788074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.800898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.801489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.801508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.801516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.801735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.801967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.801977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.801984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.801990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.815012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.815598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.815615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.815622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.815842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.816068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.816077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.816084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.816091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.828897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.829465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.829482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.829489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.829708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.829934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.829943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.213 [2024-11-20 07:30:36.829950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.213 [2024-11-20 07:30:36.829956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.213 [2024-11-20 07:30:36.842751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.213 [2024-11-20 07:30:36.843409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.213 [2024-11-20 07:30:36.843446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.213 [2024-11-20 07:30:36.843457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.213 [2024-11-20 07:30:36.843696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.213 [2024-11-20 07:30:36.843929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.213 [2024-11-20 07:30:36.843940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.843953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.843961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.856568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.857145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.857165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.857173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.857393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.857612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.857620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.857628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.857635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.870443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.871173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.871211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.871222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.871461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.871685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.871695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.871703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.871712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.884309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.884913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.884951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.884962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.885201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.885424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.885433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.885441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.885449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.898256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.898944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.898982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.898994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.899234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.899457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.899466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.899474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.899482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.912103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.912778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.912816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.912827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.913073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.913298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.913307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.913315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.913323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.926114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.926657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.926677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.926684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.926911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.927132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.927140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.927147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.927154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.939934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.940481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.940498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.940510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.940729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.940955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.940964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.940972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.940978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.953748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.954356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.954373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.954380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.954599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.954819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.954827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.954834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.954840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.214 [2024-11-20 07:30:36.967656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.214 [2024-11-20 07:30:36.968309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.214 [2024-11-20 07:30:36.968346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.214 [2024-11-20 07:30:36.968357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.214 [2024-11-20 07:30:36.968596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.214 [2024-11-20 07:30:36.968820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.214 [2024-11-20 07:30:36.968829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.214 [2024-11-20 07:30:36.968837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.214 [2024-11-20 07:30:36.968845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.477 [2024-11-20 07:30:36.981648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.477 [2024-11-20 07:30:36.982284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.477 [2024-11-20 07:30:36.982323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.477 [2024-11-20 07:30:36.982333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.477 [2024-11-20 07:30:36.982572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.477 [2024-11-20 07:30:36.982802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.477 [2024-11-20 07:30:36.982811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.477 [2024-11-20 07:30:36.982819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.477 [2024-11-20 07:30:36.982827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.477 [2024-11-20 07:30:36.995626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.477 [2024-11-20 07:30:36.996311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.477 [2024-11-20 07:30:36.996348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.477 [2024-11-20 07:30:36.996359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.477 [2024-11-20 07:30:36.996598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.477 [2024-11-20 07:30:36.996821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.477 [2024-11-20 07:30:36.996830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.477 [2024-11-20 07:30:36.996838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.477 [2024-11-20 07:30:36.996846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.477 [2024-11-20 07:30:37.009453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.477 [2024-11-20 07:30:37.010017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.477 [2024-11-20 07:30:37.010037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.477 [2024-11-20 07:30:37.010045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.477 [2024-11-20 07:30:37.010264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.010484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.010492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.010500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.010506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.023305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.023973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.024011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.024023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.024263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.024487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.024496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.024508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.024516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.037109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.037743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.037781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.037793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.038040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.038265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.038274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.038282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.038290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.051084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.051630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.051650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.051658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.051883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.052104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.052113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.052120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.052126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.064903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.065435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.065452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.065459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.065678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.065903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.065912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.065919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.065925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.078716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.079371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.079409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.079420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.079659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.079891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.079901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.079908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.079916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.092713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.093376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.093414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.093425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.093663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.093896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.093906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.093913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.093921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.106738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.107401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.107439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.107449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.107688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.107919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.107929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.107937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.107945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.120742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.121275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.121295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.121307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.121527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.121746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.121756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.121763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.121769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.134559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.135166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.478 [2024-11-20 07:30:37.135204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.478 [2024-11-20 07:30:37.135215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.478 [2024-11-20 07:30:37.135454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.478 [2024-11-20 07:30:37.135678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.478 [2024-11-20 07:30:37.135687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.478 [2024-11-20 07:30:37.135694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.478 [2024-11-20 07:30:37.135702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.478 [2024-11-20 07:30:37.148507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.478 [2024-11-20 07:30:37.149178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.149216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.149227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.149466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.149689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.149699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.149706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.149714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.162517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.163172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.163210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.163221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.163459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.163688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.163697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.163704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.163712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.176519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.177067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.177106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.177117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.177355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.177578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.177587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.177595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.177603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.190402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.190964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.190983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.190991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.191211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.191430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.191437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.191445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.191452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.204260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.204799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.204817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.204824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.205049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.205269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.205277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.205293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.205300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.218097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.218491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.218509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.218516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.218736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.218967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.218978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.218985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.218992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.479 [2024-11-20 07:30:37.231981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.479 [2024-11-20 07:30:37.232512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.479 [2024-11-20 07:30:37.232528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.479 [2024-11-20 07:30:37.232536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.479 [2024-11-20 07:30:37.232755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.479 [2024-11-20 07:30:37.232981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.479 [2024-11-20 07:30:37.232990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.479 [2024-11-20 07:30:37.232998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.479 [2024-11-20 07:30:37.233004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.245784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.246446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.246484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.246495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.246734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.246965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.246976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.246985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.246993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.259787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.260387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.260406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.260414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.260633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.260852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.260860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.260873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.260880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.273661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.274206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.274223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.274231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.274450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.274669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.274678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.274685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.274691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.287480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.288103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.288141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.288152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.288391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.288615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.288624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.288631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.288639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.301449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.301981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.302019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.302036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.302278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.302502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.302511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.302519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.302527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.315334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.315884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.315922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.315934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.316176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.316399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.316408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.316416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.316424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.329213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.329924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.329961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.329972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.741 [2024-11-20 07:30:37.330210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.741 [2024-11-20 07:30:37.330433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.741 [2024-11-20 07:30:37.330442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.741 [2024-11-20 07:30:37.330450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.741 [2024-11-20 07:30:37.330458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.741 [2024-11-20 07:30:37.343045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.741 [2024-11-20 07:30:37.343709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.741 [2024-11-20 07:30:37.343746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.741 [2024-11-20 07:30:37.343757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.344005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.344234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.344243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.344250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.344258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.357055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.357647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.357666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.357674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.357900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.358121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.358129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.358136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.358143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.370924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.371454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.371491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.371503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.371744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.371978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.371988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.371995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.372003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.384799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.385413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.385450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.385461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.385700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.385933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.385943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.385952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.385964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.398643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.399349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.399386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.399397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.399636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.399860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.399886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.399894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.399903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.412486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.412918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.412944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.412952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.413177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.413398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.413406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.413413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.413420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.426422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.426963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.427001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.427013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.427253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.427477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.427486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.427494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.427502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.440301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.440898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.440918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.440926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.441146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.441365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.441373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.441380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.441387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.454174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.454827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.454871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.454883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.455122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.455345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.455354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.455363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.455371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.468161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.742 [2024-11-20 07:30:37.468842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.742 [2024-11-20 07:30:37.468887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.742 [2024-11-20 07:30:37.468899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.742 [2024-11-20 07:30:37.469137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.742 [2024-11-20 07:30:37.469360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.742 [2024-11-20 07:30:37.469369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.742 [2024-11-20 07:30:37.469377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.742 [2024-11-20 07:30:37.469385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.742 [2024-11-20 07:30:37.481967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.743 [2024-11-20 07:30:37.482599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.743 [2024-11-20 07:30:37.482637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.743 [2024-11-20 07:30:37.482652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.743 [2024-11-20 07:30:37.482901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.743 [2024-11-20 07:30:37.483126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.743 [2024-11-20 07:30:37.483135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.743 [2024-11-20 07:30:37.483142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.743 [2024-11-20 07:30:37.483151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.743 [2024-11-20 07:30:37.495943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.743 [2024-11-20 07:30:37.496616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.743 [2024-11-20 07:30:37.496654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:02.743 [2024-11-20 07:30:37.496665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:02.743 [2024-11-20 07:30:37.496912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:02.743 [2024-11-20 07:30:37.497136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.743 [2024-11-20 07:30:37.497146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.743 [2024-11-20 07:30:37.497153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.743 [2024-11-20 07:30:37.497161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.005 [2024-11-20 07:30:37.509762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.005 [2024-11-20 07:30:37.510429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.005 [2024-11-20 07:30:37.510467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.005 [2024-11-20 07:30:37.510478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.005 [2024-11-20 07:30:37.510716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.005 [2024-11-20 07:30:37.510949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.005 [2024-11-20 07:30:37.510959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.005 [2024-11-20 07:30:37.510967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.005 [2024-11-20 07:30:37.510975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.005 [2024-11-20 07:30:37.523569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.005 [2024-11-20 07:30:37.524157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.005 [2024-11-20 07:30:37.524195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.005 [2024-11-20 07:30:37.524206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.005 [2024-11-20 07:30:37.524444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.005 [2024-11-20 07:30:37.524672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.005 [2024-11-20 07:30:37.524681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.005 [2024-11-20 07:30:37.524689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.005 [2024-11-20 07:30:37.524697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.005 [2024-11-20 07:30:37.537491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.005 [2024-11-20 07:30:37.538204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.005 [2024-11-20 07:30:37.538241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.538253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.538491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.538715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.538724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.538731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.538740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.551324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.551875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.551894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.551902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.552122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.552342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.552350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.552357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.552364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.565140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.565690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.565728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.565738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.565987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.566212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.566220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.566228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.566240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.579033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.579690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.579727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.579738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.579985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.580209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.580218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.580226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.580234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.593027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.593697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.593734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.593745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.593993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.594218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.594226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.594234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.594242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.606834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.607490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.607528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.607540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.607779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.608012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.608021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.608029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.608037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.620831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.621513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.621551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.621561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.621800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.622033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.622043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.622051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.622059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.634650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.635346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.635384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.635396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.635636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.635860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.635878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.635886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.635893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.648485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.649170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.649207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.649218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.649456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.649680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.649689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.649697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.649705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.662293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.662980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.006 [2024-11-20 07:30:37.663017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.006 [2024-11-20 07:30:37.663034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.006 [2024-11-20 07:30:37.663275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.006 [2024-11-20 07:30:37.663498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.006 [2024-11-20 07:30:37.663507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.006 [2024-11-20 07:30:37.663515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.006 [2024-11-20 07:30:37.663523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.006 [2024-11-20 07:30:37.676111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.006 [2024-11-20 07:30:37.676637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.676675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.676686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.676933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.677157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.677165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.677173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.677181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 [2024-11-20 07:30:37.689971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.690644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.690681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.690692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.690942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.691167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.691175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.691183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.691191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 [2024-11-20 07:30:37.703780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.704201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.704222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.704230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.704450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.704675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.704683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.704690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.704696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 7306.25 IOPS, 28.54 MiB/s [2024-11-20T06:30:37.774Z] [2024-11-20 07:30:37.719352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.719943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.719981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.719993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.720235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.720458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.720467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.720475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.720483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 [2024-11-20 07:30:37.733208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.733837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.733882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.733894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.734132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.734355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.734364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.734372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.734380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 [2024-11-20 07:30:37.747173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.747710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.747729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.747737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.747963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.748183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.748191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.748202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.748209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.007 [2024-11-20 07:30:37.760994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.007 [2024-11-20 07:30:37.761521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.007 [2024-11-20 07:30:37.761539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.007 [2024-11-20 07:30:37.761546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.007 [2024-11-20 07:30:37.761765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.007 [2024-11-20 07:30:37.761991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.007 [2024-11-20 07:30:37.761999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.007 [2024-11-20 07:30:37.762007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.007 [2024-11-20 07:30:37.762013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.268 [2024-11-20 07:30:37.775000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.268 [2024-11-20 07:30:37.775526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.268 [2024-11-20 07:30:37.775542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.268 [2024-11-20 07:30:37.775550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.268 [2024-11-20 07:30:37.775768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.268 [2024-11-20 07:30:37.775993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.268 [2024-11-20 07:30:37.776002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.268 [2024-11-20 07:30:37.776009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.268 [2024-11-20 07:30:37.776016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.268 [2024-11-20 07:30:37.789028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.268 [2024-11-20 07:30:37.789692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.268 [2024-11-20 07:30:37.789729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.268 [2024-11-20 07:30:37.789740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.268 [2024-11-20 07:30:37.789987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.268 [2024-11-20 07:30:37.790212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.268 [2024-11-20 07:30:37.790221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.268 [2024-11-20 07:30:37.790230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.268 [2024-11-20 07:30:37.790238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.268 [2024-11-20 07:30:37.803038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.268 [2024-11-20 07:30:37.803697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.268 [2024-11-20 07:30:37.803735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.803746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.803993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.804218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.804226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.804234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.804242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.817237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.817870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.817908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.817920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.818162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.818385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.818394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.818402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.818410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.831204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.831645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.831665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.831673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.831900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.832121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.832129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.832136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.832142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.845136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.845706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.845722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.845734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.845959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.846179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.846187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.846195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.846202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.858982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.859641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.859678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.859689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.859937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.860162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.860170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.860178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.860187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.872980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.873520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.873540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.873547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.873767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.873993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.874002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.874010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.874016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.886804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.887504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.887542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.887555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.887797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.888037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.888047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.888055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.888064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.900659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:03.269 [2024-11-20 07:30:37.901222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:03.269 [2024-11-20 07:30:37.901242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420
00:30:03.269 [2024-11-20 07:30:37.901249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set
00:30:03.269 [2024-11-20 07:30:37.901469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor
00:30:03.269 [2024-11-20 07:30:37.901688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:03.269 [2024-11-20 07:30:37.901696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:03.269 [2024-11-20 07:30:37.901704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:03.269 [2024-11-20 07:30:37.901710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:03.269 [2024-11-20 07:30:37.914504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.269 [2024-11-20 07:30:37.915168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-11-20 07:30:37.915205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.269 [2024-11-20 07:30:37.915216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.269 [2024-11-20 07:30:37.915454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.269 [2024-11-20 07:30:37.915678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.269 [2024-11-20 07:30:37.915687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.269 [2024-11-20 07:30:37.915695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.269 [2024-11-20 07:30:37.915703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.269 [2024-11-20 07:30:37.928497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.269 [2024-11-20 07:30:37.929164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.269 [2024-11-20 07:30:37.929202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.269 [2024-11-20 07:30:37.929213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.269 [2024-11-20 07:30:37.929451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.269 [2024-11-20 07:30:37.929675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.269 [2024-11-20 07:30:37.929684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.269 [2024-11-20 07:30:37.929696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.269 [2024-11-20 07:30:37.929705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.269 [2024-11-20 07:30:37.942499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:37.943161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:37.943199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:37.943209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:37.943448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:37.943671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:37.943680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:37.943688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:37.943696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.270 [2024-11-20 07:30:37.956496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:37.957086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:37.957123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:37.957134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:37.957373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:37.957596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:37.957605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:37.957613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:37.957620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.270 [2024-11-20 07:30:37.970413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:37.971004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:37.971025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:37.971032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:37.971252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:37.971472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:37.971480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:37.971487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:37.971494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.270 [2024-11-20 07:30:37.984272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:37.984848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:37.984869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:37.984877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:37.985095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:37.985314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:37.985322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:37.985329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:37.985336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.270 [2024-11-20 07:30:37.998112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:37.998785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:37.998822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:37.998833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:37.999081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:37.999306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:37.999314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:37.999322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:37.999330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.270 [2024-11-20 07:30:38.011928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:38.012603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:38.012641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:38.012652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:38.012899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:38.013124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:38.013132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:38.013140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:38.013148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.270 [2024-11-20 07:30:38.025738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.270 [2024-11-20 07:30:38.026301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.270 [2024-11-20 07:30:38.026339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.270 [2024-11-20 07:30:38.026356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.270 [2024-11-20 07:30:38.026596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.270 [2024-11-20 07:30:38.026820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.270 [2024-11-20 07:30:38.026829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.270 [2024-11-20 07:30:38.026836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.270 [2024-11-20 07:30:38.026844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.532 [2024-11-20 07:30:38.039644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.532 [2024-11-20 07:30:38.040269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.532 [2024-11-20 07:30:38.040307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.532 [2024-11-20 07:30:38.040318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.532 [2024-11-20 07:30:38.040557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.532 [2024-11-20 07:30:38.040780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.532 [2024-11-20 07:30:38.040789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.532 [2024-11-20 07:30:38.040797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.532 [2024-11-20 07:30:38.040805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.532 [2024-11-20 07:30:38.053601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.532 [2024-11-20 07:30:38.054277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.532 [2024-11-20 07:30:38.054314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.532 [2024-11-20 07:30:38.054325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.532 [2024-11-20 07:30:38.054564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.532 [2024-11-20 07:30:38.054788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.532 [2024-11-20 07:30:38.054796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.532 [2024-11-20 07:30:38.054804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.532 [2024-11-20 07:30:38.054813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.532 [2024-11-20 07:30:38.067612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.532 [2024-11-20 07:30:38.068172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.532 [2024-11-20 07:30:38.068210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.532 [2024-11-20 07:30:38.068222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.532 [2024-11-20 07:30:38.068462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.532 [2024-11-20 07:30:38.068690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.532 [2024-11-20 07:30:38.068699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.532 [2024-11-20 07:30:38.068707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.532 [2024-11-20 07:30:38.068715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.532 [2024-11-20 07:30:38.081511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.532 [2024-11-20 07:30:38.082186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.532 [2024-11-20 07:30:38.082223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.532 [2024-11-20 07:30:38.082234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.532 [2024-11-20 07:30:38.082474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.532 [2024-11-20 07:30:38.082697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.532 [2024-11-20 07:30:38.082705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.532 [2024-11-20 07:30:38.082713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.532 [2024-11-20 07:30:38.082721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.532 [2024-11-20 07:30:38.095515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.532 [2024-11-20 07:30:38.096156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.532 [2024-11-20 07:30:38.096194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.532 [2024-11-20 07:30:38.096205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.532 [2024-11-20 07:30:38.096444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.532 [2024-11-20 07:30:38.096668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.096677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.096685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.096693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.533 [2024-11-20 07:30:38.109507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.110159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.110196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.110207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.110446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.110670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.110679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.110691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.110699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.533 [2024-11-20 07:30:38.123502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.124123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.124142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.124150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.124371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.124590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.124598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.124605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.124612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.533 [2024-11-20 07:30:38.137401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.137986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.138024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.138036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.138279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.138502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.138511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.138519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.138527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.533 [2024-11-20 07:30:38.151328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.151973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.152011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.152023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.152263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.152487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.152496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.152503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.152511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.533 [2024-11-20 07:30:38.165303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.165974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.166012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.166024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.166264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.166488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.166497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.166504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.166512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.533 [2024-11-20 07:30:38.179180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.179869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.179906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.179917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.180156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.180379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.180387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.180395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.180403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.533 [2024-11-20 07:30:38.193200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.193900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.193938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.533 [2024-11-20 07:30:38.193950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.533 [2024-11-20 07:30:38.194189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.533 [2024-11-20 07:30:38.194413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.533 [2024-11-20 07:30:38.194421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.533 [2024-11-20 07:30:38.194429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.533 [2024-11-20 07:30:38.194437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.533 [2024-11-20 07:30:38.207034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.533 [2024-11-20 07:30:38.207696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.533 [2024-11-20 07:30:38.207734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.207750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.207998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.208222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.208230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.208238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.208246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.534 [2024-11-20 07:30:38.220841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.221396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.221415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.221423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.221643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.221869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.221878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.221885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.221892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.534 [2024-11-20 07:30:38.234665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.235232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.235249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.235257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.235476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.235695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.235702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.235710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.235716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.534 [2024-11-20 07:30:38.248475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.249001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.249018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.249025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.249245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.249468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.249476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.249483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.249489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.534 [2024-11-20 07:30:38.262269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.262924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.262962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.262973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.263212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.263435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.263444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.263452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.263460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.534 [2024-11-20 07:30:38.276262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.276945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.276983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.276995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.277237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.277460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.277469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.277477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.277485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.534 [2024-11-20 07:30:38.290071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.534 [2024-11-20 07:30:38.290744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.534 [2024-11-20 07:30:38.290782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.534 [2024-11-20 07:30:38.290793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.534 [2024-11-20 07:30:38.291040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.534 [2024-11-20 07:30:38.291265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.534 [2024-11-20 07:30:38.291273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.534 [2024-11-20 07:30:38.291285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.534 [2024-11-20 07:30:38.291293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.796 [2024-11-20 07:30:38.303886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.304481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.304518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.304529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.304767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.305000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.305011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.305019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.305027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.796 [2024-11-20 07:30:38.317833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.318401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.318438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.318450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.318690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.318921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.318931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.318939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.318947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.796 [2024-11-20 07:30:38.331740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.332439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.332477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.332488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.332727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.332959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.332969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.332977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.332985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.796 [2024-11-20 07:30:38.345577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.346226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.346264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.346275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.346514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.346737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.346746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.346754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.346762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.796 [2024-11-20 07:30:38.359556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.360141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.360161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.360169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.360389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.360609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.360617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.360624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.360631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.796 [2024-11-20 07:30:38.373465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.374032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.374050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.374057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.374277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.374496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.374505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.374512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.374519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.796 [2024-11-20 07:30:38.387322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.387856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.387877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.387890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.388109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.388329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.796 [2024-11-20 07:30:38.388337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.796 [2024-11-20 07:30:38.388344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.796 [2024-11-20 07:30:38.388350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.796 [2024-11-20 07:30:38.401153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.796 [2024-11-20 07:30:38.401821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.796 [2024-11-20 07:30:38.401859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.796 [2024-11-20 07:30:38.401880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.796 [2024-11-20 07:30:38.402119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.796 [2024-11-20 07:30:38.402343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.402352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.402359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.402368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.797 [2024-11-20 07:30:38.414973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.415574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.415612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.415623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.415871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.416107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.416117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.416125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.416133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.797 [2024-11-20 07:30:38.428812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.429372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.429392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.429400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.429620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.429845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.429853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.429861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.429874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.797 [2024-11-20 07:30:38.442667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.443301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.443339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.443350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.443589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.443812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.443821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.443829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.443837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.797 [2024-11-20 07:30:38.456662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.457358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.457395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.457406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.457645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.457877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.457888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.457896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.457905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.797 [2024-11-20 07:30:38.470516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.471188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.471226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.471239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.471478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.471702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.471710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.471723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.471731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.797 [2024-11-20 07:30:38.484333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.484895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.484915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.484923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.485143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.485363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.485370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.485377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.485384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.797 [2024-11-20 07:30:38.498199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.498879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.498918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.498930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.499172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.499396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.499405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.499413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.499420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.797 [2024-11-20 07:30:38.512025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.512607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.512626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.512633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.512853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.513081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.513091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.513098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.513105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.797 [2024-11-20 07:30:38.525896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.526556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.526594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.526605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.797 [2024-11-20 07:30:38.526843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.797 [2024-11-20 07:30:38.527074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.797 [2024-11-20 07:30:38.527083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.797 [2024-11-20 07:30:38.527091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.797 [2024-11-20 07:30:38.527100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:03.797 [2024-11-20 07:30:38.539897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.797 [2024-11-20 07:30:38.540479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.797 [2024-11-20 07:30:38.540499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.797 [2024-11-20 07:30:38.540507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.798 [2024-11-20 07:30:38.540727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.798 [2024-11-20 07:30:38.540954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.798 [2024-11-20 07:30:38.540963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.798 [2024-11-20 07:30:38.540970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.798 [2024-11-20 07:30:38.540977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:03.798 [2024-11-20 07:30:38.553781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:03.798 [2024-11-20 07:30:38.554322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:03.798 [2024-11-20 07:30:38.554340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:03.798 [2024-11-20 07:30:38.554348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:03.798 [2024-11-20 07:30:38.554567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:03.798 [2024-11-20 07:30:38.554787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:03.798 [2024-11-20 07:30:38.554796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:03.798 [2024-11-20 07:30:38.554804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:03.798 [2024-11-20 07:30:38.554810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.059 [2024-11-20 07:30:38.567625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.059 [2024-11-20 07:30:38.568164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.059 [2024-11-20 07:30:38.568181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.059 [2024-11-20 07:30:38.568198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.059 [2024-11-20 07:30:38.568417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.059 [2024-11-20 07:30:38.568637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.059 [2024-11-20 07:30:38.568645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.059 [2024-11-20 07:30:38.568652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.059 [2024-11-20 07:30:38.568659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.059 [2024-11-20 07:30:38.581459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.059 [2024-11-20 07:30:38.581996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.059 [2024-11-20 07:30:38.582013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.582021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.582240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.582459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.582467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.582474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.582481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.060 [2024-11-20 07:30:38.595278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.595837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.595853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.595860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.596086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.596305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.596313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.596320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.596326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.060 [2024-11-20 07:30:38.609139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.609705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.609721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.609729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.609953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.610177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.610185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.610192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.610199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.060 [2024-11-20 07:30:38.623005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.623572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.623588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.623595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.623815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.624039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.624047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.624055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.624061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.060 [2024-11-20 07:30:38.636866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.637517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.637555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.637567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.637808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.638040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.638051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.638059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.638067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.060 [2024-11-20 07:30:38.650669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.651239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.651260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.651268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.651487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.651707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.651715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.651727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.651734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.060 [2024-11-20 07:30:38.664530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.665202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.665239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.665251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.665489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.665712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.665721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.665729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.665737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.060 [2024-11-20 07:30:38.678532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.679104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.679124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.679132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.679351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.679571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.679578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.679585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.679592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.060 [2024-11-20 07:30:38.692422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.692970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.692989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.692996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.693215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.693434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.693442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.693450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.693456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.060 [2024-11-20 07:30:38.706265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.060 [2024-11-20 07:30:38.706834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.060 [2024-11-20 07:30:38.706850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.060 [2024-11-20 07:30:38.706858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.060 [2024-11-20 07:30:38.707084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.060 [2024-11-20 07:30:38.707304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.060 [2024-11-20 07:30:38.707312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.060 [2024-11-20 07:30:38.707319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.060 [2024-11-20 07:30:38.707326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.060 5845.00 IOPS, 22.83 MiB/s [2024-11-20T06:30:38.827Z] [2024-11-20 07:30:38.721790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.722365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.722383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.722390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.722609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.722828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.722836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.722843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.722850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.061 [2024-11-20 07:30:38.735651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.736086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.736105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.736112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.736332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.736552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.736560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.736567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.736574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.061 [2024-11-20 07:30:38.749584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.750151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.750168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.750181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.750401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.750621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.750628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.750635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.750642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.061 [2024-11-20 07:30:38.763442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.763972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.763989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.763997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.764216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.764435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.764442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.764449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.764456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.061 [2024-11-20 07:30:38.777252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.777774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.777790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.777797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.778020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.778240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.778249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.778256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.778262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.061 [2024-11-20 07:30:38.791051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.791617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.791633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.791640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.791858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.792088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.792096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.792103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.792110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.061 [2024-11-20 07:30:38.804923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.805582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.805619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.805630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.805878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.806103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.806111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.806119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.806127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.061 [2024-11-20 07:30:38.818924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.061 [2024-11-20 07:30:38.819463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.061 [2024-11-20 07:30:38.819482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.061 [2024-11-20 07:30:38.819490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.061 [2024-11-20 07:30:38.819710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.061 [2024-11-20 07:30:38.819937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.061 [2024-11-20 07:30:38.819946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.061 [2024-11-20 07:30:38.819953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.061 [2024-11-20 07:30:38.819960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.323 [2024-11-20 07:30:38.832764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.833414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.323 [2024-11-20 07:30:38.833451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.323 [2024-11-20 07:30:38.833462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.323 [2024-11-20 07:30:38.833701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.323 [2024-11-20 07:30:38.833936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.323 [2024-11-20 07:30:38.833946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.323 [2024-11-20 07:30:38.833959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.323 [2024-11-20 07:30:38.833967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.323 [2024-11-20 07:30:38.846771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.847356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.323 [2024-11-20 07:30:38.847376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.323 [2024-11-20 07:30:38.847384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.323 [2024-11-20 07:30:38.847604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.323 [2024-11-20 07:30:38.847823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.323 [2024-11-20 07:30:38.847831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.323 [2024-11-20 07:30:38.847839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.323 [2024-11-20 07:30:38.847845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.323 [2024-11-20 07:30:38.860639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.861179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.323 [2024-11-20 07:30:38.861196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.323 [2024-11-20 07:30:38.861204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.323 [2024-11-20 07:30:38.861423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.323 [2024-11-20 07:30:38.861642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.323 [2024-11-20 07:30:38.861650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.323 [2024-11-20 07:30:38.861657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.323 [2024-11-20 07:30:38.861663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.323 [2024-11-20 07:30:38.874461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.874970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.323 [2024-11-20 07:30:38.874987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.323 [2024-11-20 07:30:38.874994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.323 [2024-11-20 07:30:38.875214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.323 [2024-11-20 07:30:38.875432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.323 [2024-11-20 07:30:38.875440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.323 [2024-11-20 07:30:38.875447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.323 [2024-11-20 07:30:38.875454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.323 [2024-11-20 07:30:38.888468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.889078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.323 [2024-11-20 07:30:38.889117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.323 [2024-11-20 07:30:38.889127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.323 [2024-11-20 07:30:38.889366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.323 [2024-11-20 07:30:38.889590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.323 [2024-11-20 07:30:38.889598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.323 [2024-11-20 07:30:38.889606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.323 [2024-11-20 07:30:38.889614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.323 [2024-11-20 07:30:38.902433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.323 [2024-11-20 07:30:38.903007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.903045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.903058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.903299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.903523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.903532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.903540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.903548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.324 [2024-11-20 07:30:38.916351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.916973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.917010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.917022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.917264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.917488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.917496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.917505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.917513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.324 [2024-11-20 07:30:38.930321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.930885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.930904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.930917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.931138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.931358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.931366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.931373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.931380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.324 [2024-11-20 07:30:38.944199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.944752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.944789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.944801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.945052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.945276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.945285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.945293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.945301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.324 [2024-11-20 07:30:38.958094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.958772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.958810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.958822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.959074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.959300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.959309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.959317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.959325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.324 [2024-11-20 07:30:38.971912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.972568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.972606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.972617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.972855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.973092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.973101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.973109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.973117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.324 [2024-11-20 07:30:38.985908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:38.986483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:38.986502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:38.986510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:38.986730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:38.986955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:38.986964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:38.986972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:38.986978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.324 [2024-11-20 07:30:38.999747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:39.000413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:39.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:39.000462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:39.000700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:39.000940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:39.000950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:39.000958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:39.000966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.324 [2024-11-20 07:30:39.013594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:39.014157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:39.014195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:39.014207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:39.014450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:39.014674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:39.014682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:39.014694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:39.014702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.324 [2024-11-20 07:30:39.027509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.324 [2024-11-20 07:30:39.028144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.324 [2024-11-20 07:30:39.028182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.324 [2024-11-20 07:30:39.028195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.324 [2024-11-20 07:30:39.028435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.324 [2024-11-20 07:30:39.028658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.324 [2024-11-20 07:30:39.028667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.324 [2024-11-20 07:30:39.028675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.324 [2024-11-20 07:30:39.028683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.324 [2024-11-20 07:30:39.041480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.325 [2024-11-20 07:30:39.042183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.325 [2024-11-20 07:30:39.042221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.325 [2024-11-20 07:30:39.042232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.325 [2024-11-20 07:30:39.042471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.325 [2024-11-20 07:30:39.042694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.325 [2024-11-20 07:30:39.042704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.325 [2024-11-20 07:30:39.042711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.325 [2024-11-20 07:30:39.042719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.325 [2024-11-20 07:30:39.055315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.325 [2024-11-20 07:30:39.055975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.325 [2024-11-20 07:30:39.056012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.325 [2024-11-20 07:30:39.056025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.325 [2024-11-20 07:30:39.056267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.325 [2024-11-20 07:30:39.056491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.325 [2024-11-20 07:30:39.056500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.325 [2024-11-20 07:30:39.056507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.325 [2024-11-20 07:30:39.056516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.325 [2024-11-20 07:30:39.069316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.325 [2024-11-20 07:30:39.069952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.325 [2024-11-20 07:30:39.069990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.325 [2024-11-20 07:30:39.070001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.325 [2024-11-20 07:30:39.070240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.325 [2024-11-20 07:30:39.070463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.325 [2024-11-20 07:30:39.070472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.325 [2024-11-20 07:30:39.070480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.325 [2024-11-20 07:30:39.070488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.325 [2024-11-20 07:30:39.083289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.325 [2024-11-20 07:30:39.083838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.325 [2024-11-20 07:30:39.083857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.325 [2024-11-20 07:30:39.083872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.325 [2024-11-20 07:30:39.084092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.325 [2024-11-20 07:30:39.084311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.325 [2024-11-20 07:30:39.084319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.325 [2024-11-20 07:30:39.084326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.325 [2024-11-20 07:30:39.084333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.586 [2024-11-20 07:30:39.097110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.586 [2024-11-20 07:30:39.097770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.586 [2024-11-20 07:30:39.097808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.586 [2024-11-20 07:30:39.097820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.586 [2024-11-20 07:30:39.098070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.586 [2024-11-20 07:30:39.098294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.586 [2024-11-20 07:30:39.098303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.586 [2024-11-20 07:30:39.098311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.586 [2024-11-20 07:30:39.098319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.587 [2024-11-20 07:30:39.111111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.111575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.111594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.111606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.111826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.112051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.112060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.112067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.112074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.587 [2024-11-20 07:30:39.125074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.125678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.125715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.125727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.125975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.126200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.126208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.126216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.126224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.587 [2024-11-20 07:30:39.139029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.139636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.139674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.139685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.139933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.140158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.140166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.140174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.140182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.587 [2024-11-20 07:30:39.152978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.153617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.153655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.153666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.153914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.154144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.154153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.154160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.154168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.587 [2024-11-20 07:30:39.166956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.167541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.167560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.167568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.167788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.168014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.168023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.168030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.168037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.587 [2024-11-20 07:30:39.180806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.181376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.181393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.181400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.181619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.181838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.181846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.181853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.181859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.587 [2024-11-20 07:30:39.194636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.195260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.195297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.195308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.195547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.195771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.195779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.195791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.195800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.587 [2024-11-20 07:30:39.208605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.209198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.209218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.209226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.209446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.209665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.209673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.209680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.209687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.587 [2024-11-20 07:30:39.222471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.223143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.223180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.223191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.223430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.223654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.223663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.587 [2024-11-20 07:30:39.223670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.587 [2024-11-20 07:30:39.223678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.587 [2024-11-20 07:30:39.236472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.587 [2024-11-20 07:30:39.237164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.587 [2024-11-20 07:30:39.237201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.587 [2024-11-20 07:30:39.237212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.587 [2024-11-20 07:30:39.237451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.587 [2024-11-20 07:30:39.237674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.587 [2024-11-20 07:30:39.237684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.237692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.237700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.588 [2024-11-20 07:30:39.250285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.250926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.250964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.250974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.251213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.251437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.251445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.251454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.251461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.588 [2024-11-20 07:30:39.264255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.264940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.264978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.264990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.265230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.265453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.265462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.265470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.265478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.588 [2024-11-20 07:30:39.278063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.278613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.278633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.278641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.278867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.279088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.279096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.279103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.279110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.588 [2024-11-20 07:30:39.291883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.292447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.292464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.292476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.292695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.292919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.292928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.292935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.292941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.588 [2024-11-20 07:30:39.305717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.306243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.306260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.306267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.306486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.306705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.306713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.306720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.306727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.588 [2024-11-20 07:30:39.319715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.320279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.320296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.320303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.320522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.320740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.320748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.320755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.320762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.588 [2024-11-20 07:30:39.333541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.334188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.334225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.334236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.334474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.334703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.334712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.334719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.334727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.588 [2024-11-20 07:30:39.347525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.588 [2024-11-20 07:30:39.348087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.588 [2024-11-20 07:30:39.348124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.588 [2024-11-20 07:30:39.348137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.588 [2024-11-20 07:30:39.348377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.588 [2024-11-20 07:30:39.348600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.588 [2024-11-20 07:30:39.348609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.588 [2024-11-20 07:30:39.348617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.588 [2024-11-20 07:30:39.348625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
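The cycle repeated above is the host side of a failover window: the next lines show the old nvmf_tgt process being killed and a new one started, so every reconnect attempt's connect() fails with errno = 111, which is ECONNREFUSED on Linux, because nothing is listening on 10.0.0.2 port 4420 yet. A minimal C sketch (not SPDK code; loopback and port 4420 here only mirror the log, any closed port behaves the same) reproduces that errno:

    /* Sketch: connect() to a TCP port with no listener fails with
     * ECONNREFUSED (errno 111 on Linux), matching the repeated
     * "connect() failed, errno = 111" entries above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }

        close(fd);
        return 0;
    }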
00:30:04.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1479522 Killed "${NVMF_APP[@]}" "$@"
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:04.850 [2024-11-20 07:30:39.361447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.850 [2024-11-20 07:30:39.362013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-11-20 07:30:39.362052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.850 [2024-11-20 07:30:39.362064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.850 [2024-11-20 07:30:39.362306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.850 [2024-11-20 07:30:39.362529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.850 [2024-11-20 07:30:39.362538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.850 [2024-11-20 07:30:39.362546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.850 [2024-11-20 07:30:39.362555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1481229
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1481229
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1481229 ']'
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:04.850 07:30:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.850 [2024-11-20 07:30:39.375358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.850 [2024-11-20 07:30:39.375923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-11-20 07:30:39.375960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.850 [2024-11-20 07:30:39.375973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.850 [2024-11-20 07:30:39.376215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.850 [2024-11-20 07:30:39.376439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.850 [2024-11-20 07:30:39.376448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.850 [2024-11-20 07:30:39.376457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.850 [2024-11-20 07:30:39.376465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.850 [2024-11-20 07:30:39.389263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.850 [2024-11-20 07:30:39.389722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.850 [2024-11-20 07:30:39.389741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.850 [2024-11-20 07:30:39.389748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.850 [2024-11-20 07:30:39.389974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.850 [2024-11-20 07:30:39.390194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.390202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.390210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.390216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.851 [2024-11-20 07:30:39.403221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.403776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.403813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.403825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.404075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.404300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.404313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.404321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.404329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.851 [2024-11-20 07:30:39.417127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.417681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.417719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.417730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.417976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.418210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.418219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.418227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.418235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.851 [2024-11-20 07:30:39.422636] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:30:04.851 [2024-11-20 07:30:39.422682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.851 [2024-11-20 07:30:39.431036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.431709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.431746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.431757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.432004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.432228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.432237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.432245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.432253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.851 [2024-11-20 07:30:39.445033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.445584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.445622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.445634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.445884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.446109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.446122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.446131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.446139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.851 [2024-11-20 07:30:39.459015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.459680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.459718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.459730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.459977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.460201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.460210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.460218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.460227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.851 [2024-11-20 07:30:39.473028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.473618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.473637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.473645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.473871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.474092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.474100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.474107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.474114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.851 [2024-11-20 07:30:39.486900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.487478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.487495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.487503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.487721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.487946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.487954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.487961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.487972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.851 [2024-11-20 07:30:39.500757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.501165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.501184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.501191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.501411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.501631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.501639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.501646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.501653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:04.851 [2024-11-20 07:30:39.514650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.851 [2024-11-20 07:30:39.515359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.851 [2024-11-20 07:30:39.515397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.851 [2024-11-20 07:30:39.515408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.851 [2024-11-20 07:30:39.515647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.851 [2024-11-20 07:30:39.515877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.851 [2024-11-20 07:30:39.515887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.851 [2024-11-20 07:30:39.515895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.851 [2024-11-20 07:30:39.515903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.851 [2024-11-20 07:30:39.521319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:04.851 [2024-11-20 07:30:39.528510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.529257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.529295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.529307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.529547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.529771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.529780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.529788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.529797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.852 [2024-11-20 07:30:39.542386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.543006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.543044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.543056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.543299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.543523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.543531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.543539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.543547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.852 [2024-11-20 07:30:39.550429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:04.852 [2024-11-20 07:30:39.550450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:04.852 [2024-11-20 07:30:39.550457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:04.852 [2024-11-20 07:30:39.550462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:04.852 [2024-11-20 07:30:39.550467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:04.852 [2024-11-20 07:30:39.551587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:04.852 [2024-11-20 07:30:39.551742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:04.852 [2024-11-20 07:30:39.551744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:04.852 [2024-11-20 07:30:39.556344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.556953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.556991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.557003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.557242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.557467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.557476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.557484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.557492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:04.852 [2024-11-20 07:30:39.570301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.570968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.571006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.571018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.571257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.571480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.571495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.571503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.571511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
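The three reactor lines follow directly from the -m 0xE core mask passed to nvmf_tgt above: the mask is a bit map of CPU cores, and 0xE is binary 1110, so bits 1, 2 and 3 are set. That is why spdk_app_start reports three available cores and reactors start on cores 1, 2 and 3. A short C sketch (illustrative only, not SPDK's parser) decodes such a mask:

    /* Sketch: decode an SPDK/DPDK-style hex core mask into core indices.
     * 0xE = binary 1110 -> bits 1, 2 and 3 are set, matching the three
     * "Reactor started on core N" lines above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;   /* -m 0xE from the log */

        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf(" %d", core);
            }
        }
        printf("\n");               /* prints: 1 2 3 */
        return 0;
    }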
00:30:04.852 [2024-11-20 07:30:39.584313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.585053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.585090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.585102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.585341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.585565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.585574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.585582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.585591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:04.852 [2024-11-20 07:30:39.598182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:04.852 [2024-11-20 07:30:39.598895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.852 [2024-11-20 07:30:39.598933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:04.852 [2024-11-20 07:30:39.598945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:04.852 [2024-11-20 07:30:39.599188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:04.852 [2024-11-20 07:30:39.599412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:04.852 [2024-11-20 07:30:39.599421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:04.852 [2024-11-20 07:30:39.599429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:04.852 [2024-11-20 07:30:39.599437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:05.115 4870.83 IOPS, 19.03 MiB/s [2024-11-20T06:30:39.882Z]
[... the reset/connect-refused/reset-failed cycle for tqpair=0x1d206a0 continues at the same cadence from 07:30:39.725 through 07:30:40.199 (35 further identical cycles elided; every attempt ends with "Resetting controller failed.") ...]
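As a sanity check on the bdevperf progress line above: 19.03 MiB/s divided by 4870.83 IOPS is about 4096 bytes per operation, so the two figures are mutually consistent with a 4 KiB I/O size (4870.83 x 4096 B = 19,950,919 B/s, and 19,950,919 / 1,048,576 = 19.03 MiB/s). The I/O size itself is not printed here, so 4 KiB is an inference from the two reported numbers; the takeaway is simply that I/O is still completing at a steady rate even while this controller path loops on failed reconnects.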
00:30:05.644 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:05.644 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:30:05.644 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:05.644 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
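The shell trace above, "(( i == 0 ))" followed by "return 0" and then "timing_exit start_nvmf_tgt", reads like the tail of a poll-until-ready helper succeeding on its first check before the harness closes the start_nvmf_tgt timing region. A hypothetical sketch of that pattern (the names wait_for_rpc and rpc_ready are illustrative, not taken from autotest_common.sh):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in probe; the real harness would ping the target's RPC socket. */
static bool rpc_ready(void)
{
    return true;
}

/* Poll until the probe succeeds; success on the first iteration matches
 * the "(( i == 0 )) ... return 0" shape of the trace. */
static int wait_for_rpc(int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        if (rpc_ready())
            return 0;
        usleep(100 * 1000); /* back off 100 ms before the next try */
    }
    return 1; /* target never answered */
}

int main(void)
{
    printf("rc=%d\n", wait_for_rpc(10));
    return 0;
}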
00:30:05.645 [2024-11-20 07:30:40.225841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 [2024-11-20 07:30:40.226276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.226294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.226302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.226521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.226740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.226747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.226755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.226761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.645 [2024-11-20 07:30:40.239752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 [2024-11-20 07:30:40.240294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.240311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.240320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.240539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.240758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.240766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.240774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.240780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
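Errno 111 in the entries above is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 is answered with a reset because the target's listener has not been added yet (it shows up further down, at tcp.c:1081). A minimal stand-alone probe for the same condition, assuming only bash's /dev/tcp support; the helper name is ours, the address and port are taken from the log:

    # Returns 0 only if something is accepting connections on addr:port.
    probe_listener() {
        local addr=$1 port=$2
        if timeout 1 bash -c ">/dev/tcp/${addr}/${port}" 2>/dev/null; then
            echo "listener up on ${addr}:${port}"
        else
            echo "no listener on ${addr}:${port} (refused or timed out, cf. errno 111)"
        fi
    }
    probe_listener 10.0.0.2 4420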
00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.645 [2024-11-20 07:30:40.248756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.645 [2024-11-20 07:30:40.253569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.645 [2024-11-20 07:30:40.254258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.254296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.254307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.254547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.645 [2024-11-20 07:30:40.254770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.254779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.254787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.254795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
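The trap registered at the top of the lines above (nvmf/common.sh@512) is what keeps these tests from leaking targets: teardown runs on normal exit and on interrupts alike, and the '|| :' keeps a failed shared-memory dump from masking the real exit status under 'set -e'. A minimal sketch of the idiom with stand-in names (dump_diagnostics and stop_target are ours, not harness functions):

    set -e
    dump_diagnostics() { echo "collect shm/diagnostics here"; }
    stop_target()      { echo "tear down the target here"; }
    cleanup() {
        dump_diagnostics || :   # never abort cleanup on a diagnostics failure
        stop_target
    }
    trap cleanup SIGINT SIGTERM EXIT
    echo "test body"
    trap - SIGINT SIGTERM EXIT  # disarm on success, as bdevperf.sh@42 does below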
00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.645 [2024-11-20 07:30:40.267388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 [2024-11-20 07:30:40.268091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.268129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.268140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.268379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.268603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.268612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.268619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.268627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.645 Malloc0 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.645 [2024-11-20 07:30:40.281225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.645 [2024-11-20 07:30:40.281875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.281919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.281932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.282175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.282398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.282407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.282415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.282424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
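Interleaved with the reconnect noise, the xtrace lines above and just below spell out the whole target bring-up: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem cnode1, its namespace, and finally the 10.0.0.2:4420 listener. Since rpc_cmd effectively forwards its arguments to rpc.py, the same sequence run by hand would be roughly the following; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, the flags are copied from the rpc_cmd lines in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420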
00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.645 [2024-11-20 07:30:40.295217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 [2024-11-20 07:30:40.295884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.295921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.295934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.296176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.296399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.645 [2024-11-20 07:30:40.296408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.645 [2024-11-20 07:30:40.296416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:05.645 [2024-11-20 07:30:40.296424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.645 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.645 [2024-11-20 07:30:40.309026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.645 [2024-11-20 07:30:40.309557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:05.645 [2024-11-20 07:30:40.309575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d206a0 with addr=10.0.0.2, port=4420 00:30:05.645 [2024-11-20 07:30:40.309583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d206a0 is same with the state(6) to be set 00:30:05.645 [2024-11-20 07:30:40.309803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d206a0 (9): Bad file descriptor 00:30:05.645 [2024-11-20 07:30:40.310030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:05.646 [2024-11-20 07:30:40.310038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:05.646 [2024-11-20 07:30:40.310050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:30:05.646 [2024-11-20 07:30:40.310057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:05.646 [2024-11-20 07:30:40.311745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.646 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.646 07:30:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1480030 00:30:05.646 [2024-11-20 07:30:40.322841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:05.907 [2024-11-20 07:30:40.441329] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:07.112 4641.29 IOPS, 18.13 MiB/s [2024-11-20T06:30:42.823Z] 5467.50 IOPS, 21.36 MiB/s [2024-11-20T06:30:43.764Z] 6143.22 IOPS, 24.00 MiB/s [2024-11-20T06:30:45.150Z] 6678.60 IOPS, 26.09 MiB/s [2024-11-20T06:30:46.092Z] 7090.09 IOPS, 27.70 MiB/s [2024-11-20T06:30:47.033Z] 7460.75 IOPS, 29.14 MiB/s [2024-11-20T06:30:47.974Z] 7749.08 IOPS, 30.27 MiB/s [2024-11-20T06:30:48.915Z] 8021.50 IOPS, 31.33 MiB/s [2024-11-20T06:30:48.915Z] 8235.13 IOPS, 32.17 MiB/s 00:30:14.148 Latency(us) 00:30:14.148 [2024-11-20T06:30:48.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.148 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:14.148 Verification LBA range: start 0x0 length 0x4000 00:30:14.148 Nvme1n1 : 15.05 8207.72 32.06 9919.79 0.00 7016.84 798.72 42379.95 00:30:14.148 [2024-11-20T06:30:48.915Z] =================================================================================================================== 00:30:14.148 [2024-11-20T06:30:48.915Z] Total : 8207.72 32.06 9919.79 0.00 7016.84 798.72 42379.95 00:30:14.148 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:14.148 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.148 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.148 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.409 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.410 rmmod nvme_tcp 00:30:14.410 rmmod nvme_fabrics 00:30:14.410 rmmod nvme_keyring 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:14.410 07:30:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1481229 ']' 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1481229 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1481229 ']' 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1481229 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:14.410 07:30:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1481229 00:30:14.410 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:14.410 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:14.410 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1481229' 00:30:14.410 killing process with pid 1481229 00:30:14.410 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1481229 00:30:14.410 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1481229 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.670 07:30:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.584 07:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.584 00:30:16.584 real 0m29.261s 00:30:16.584 user 1m3.243s 00:30:16.584 sys 0m8.410s 00:30:16.584 07:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:16.584 07:30:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.584 ************************************ 00:30:16.584 END TEST nvmf_bdevperf 00:30:16.584 ************************************ 00:30:16.585 07:30:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:16.585 07:30:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:16.585 07:30:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.585 07:30:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.585 ************************************ 00:30:16.585 START TEST nvmf_target_disconnect 00:30:16.585 ************************************ 00:30:16.585 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:16.847 * Looking for test storage... 00:30:16.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.847 --rc genhtml_branch_coverage=1 00:30:16.847 --rc genhtml_function_coverage=1 00:30:16.847 --rc genhtml_legend=1 00:30:16.847 --rc geninfo_all_blocks=1 00:30:16.847 --rc geninfo_unexecuted_blocks=1 00:30:16.847 00:30:16.847 ' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.847 --rc genhtml_branch_coverage=1 00:30:16.847 --rc genhtml_function_coverage=1 00:30:16.847 --rc genhtml_legend=1 00:30:16.847 --rc geninfo_all_blocks=1 00:30:16.847 --rc geninfo_unexecuted_blocks=1 00:30:16.847 00:30:16.847 ' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.847 --rc genhtml_branch_coverage=1 00:30:16.847 --rc genhtml_function_coverage=1 00:30:16.847 --rc genhtml_legend=1 00:30:16.847 --rc geninfo_all_blocks=1 00:30:16.847 --rc geninfo_unexecuted_blocks=1 00:30:16.847 00:30:16.847 ' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.847 --rc genhtml_branch_coverage=1 00:30:16.847 --rc genhtml_function_coverage=1 00:30:16.847 --rc genhtml_legend=1 00:30:16.847 --rc geninfo_all_blocks=1 00:30:16.847 --rc geninfo_unexecuted_blocks=1 00:30:16.847 00:30:16.847 ' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.847 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.848 07:30:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:25.058 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:25.058 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.058 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:25.059 Found net devices under 0000:31:00.0: cvl_0_0 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:25.059 Found net devices under 0000:31:00.1: cvl_0_1 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
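The lines that follow trace nvmf_tcp_init: with both e810 ports found (cvl_0_0, cvl_0_1), the target-side port is moved into a private network namespace and each end gets an address, so initiator and target can exchange real TCP traffic on a single machine. Condensed from the ip and iptables commands traced below, with interface names and addresses as in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # sanity check, as in the trace below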
00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.059 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.319 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.319 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.319 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.319 07:30:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:30:25.319 00:30:25.319 --- 10.0.0.2 ping statistics --- 00:30:25.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.319 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:25.319 00:30:25.319 --- 10.0.0.1 ping statistics --- 00:30:25.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.319 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:25.319 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.320 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:25.320 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:25.320 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.320 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:25.320 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:25.580 ************************************ 00:30:25.580 START TEST nvmf_target_disconnect_tc1 00:30:25.580 ************************************ 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:25.580 07:31:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.580 [2024-11-20 07:31:00.289560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.580 [2024-11-20 07:31:00.289623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215dcf0 with addr=10.0.0.2, port=4420 00:30:25.580 [2024-11-20 07:31:00.289656] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:25.580 [2024-11-20 07:31:00.289667] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:25.580 [2024-11-20 07:31:00.289675] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:25.580 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:25.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:25.580 Initializing NVMe Controllers 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:25.580 00:30:25.580 real 0m0.138s 00:30:25.580 user 0m0.060s 00:30:25.580 sys 0m0.077s 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.580 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:25.580 ************************************ 00:30:25.580 END TEST nvmf_target_disconnect_tc1 00:30:25.580 ************************************ 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:25.842 ************************************ 00:30:25.842 START TEST nvmf_target_disconnect_tc2 00:30:25.842 ************************************ 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1487828 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1487828 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1487828 ']' 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:25.842 07:31:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.842 [2024-11-20 07:31:00.459039] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:30:25.842 [2024-11-20 07:31:00.459103] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.842 [2024-11-20 07:31:00.568928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.103 [2024-11-20 07:31:00.621817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.103 [2024-11-20 07:31:00.621882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:26.103 [2024-11-20 07:31:00.621892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.103 [2024-11-20 07:31:00.621899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.103 [2024-11-20 07:31:00.621906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.103 [2024-11-20 07:31:00.623984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:26.103 [2024-11-20 07:31:00.624126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:26.103 [2024-11-20 07:31:00.624288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:26.103 [2024-11-20 07:31:00.624288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 Malloc0 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 [2024-11-20 07:31:01.365265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 07:31:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 [2024-11-20 07:31:01.405694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.676 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.677 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1487993 00:30:26.677 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:26.677 07:31:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.248 07:31:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1487828 00:30:29.248 07:31:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error 
(sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.248 Read completed with error (sct=0, sc=8) 00:30:29.248 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Read completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 Write completed with error (sct=0, sc=8) 00:30:29.249 starting I/O failed 00:30:29.249 [2024-11-20 07:31:03.439890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:29.249 [2024-11-20 07:31:03.440407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.440456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.440653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.440665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.440902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.440923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 
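Taken together, the trace above reduces to a short sequence: tc1 ran the same reconnect binary against a target that was never started and expected spdk_nvme_probe() to fail (es=1), while tc2 starts a real target, puts I/O load on it, and then kills it mid-flight. The following is a minimal sketch of that tc2 flow, reconstructed from the commands visible in the trace; paths are relative to the spdk checkout, scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and $nvmfpid stands in for the literal pid 1487828 captured by the test.

# Sketch of the tc2 disconnect sequence shown in the trace above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # reactors on cores 4-7
nvmfpid=$!
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                          # RAM-backed namespace
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &             # queue depth 32, 10 s run
sleep 2
kill -9 "$nvmfpid"                                                             # hard-kill the target mid-I/O

When the SIGKILL lands, reconnect still has outstanding I/Os on the qpair; the burst of exactly 32 "Read/Write completed with error (sct=0, sc=8)" lines matches its -q 32 queue depth. Status code type 0 / status code 8 is the NVMe Generic Command Status "Command Aborted due to SQ Deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION), i.e. the host library aborting everything queued on the dead qpair, and the accompanying "CQ transport error -6 (No such device or address)" is -ENXIO from the broken TCP connection.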
00:30:29.249 [2024-11-20 07:31:03.441375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.441388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.441736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.441748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.442151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.442188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.442516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.442531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.442873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.442899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.443262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.443275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.443554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.443566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.444066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.444104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.444405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.444419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.444748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.444760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 
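From this point on every reconnection attempt fails the same way: connect() returns errno 111, which on Linux is ECONNREFUSED, because nothing is listening on 10.0.0.2:4420 once the target has been SIGKILLed; nvme_tcp_qpair_connect_sock() then gives up on the qpair (the 0x7efe54000b90 value is simply the address of the tqpair object) and the example keeps retrying for the rest of its 10-second window. The refused state can be confirmed by hand as below; ss and the python3 one-liner are generic tools rather than part of the test harness, and the netns name is taken from the trace.

ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'    # empty: no listener after the kill -9
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused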
00:30:29.249 [2024-11-20 07:31:03.445090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.445103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.445400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.445413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.445592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.445605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.445943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.445955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.446292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.446304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.446612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.446625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.446912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.446925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.447236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.447248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.447538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.447550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.447874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.447886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 
00:30:29.249 [2024-11-20 07:31:03.448200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.448212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.448562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.448574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.448769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.448781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.449131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.449144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.449452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.249 [2024-11-20 07:31:03.449465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.249 qpair failed and we were unable to recover it. 00:30:29.249 [2024-11-20 07:31:03.449677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.449692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.449990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.450002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.450337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.450349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.450634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.450647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.450975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.450988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 
00:30:29.250 [2024-11-20 07:31:03.451319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.451331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.451664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.451676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.451995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.452007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.452350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.452362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.452694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.452706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.452885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.452898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.453263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.453275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.453566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.453578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.453878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.453895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.454237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.454248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 
00:30:29.250 [2024-11-20 07:31:03.454398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.454408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.454737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.454748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.455085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.455097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.455429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.455441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.455729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.455740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.455946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.455958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.456265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.456276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.456439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.456451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.456774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.456786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.457113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.457125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 
00:30:29.250 [2024-11-20 07:31:03.457466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.457477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.457812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.457823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.458103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.458115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.458413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.458425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.458711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.458722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.459105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.459117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.459454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.459466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.459761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.459772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.460107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.460120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.460410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.460421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 
00:30:29.250 [2024-11-20 07:31:03.460765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.460776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.461063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.250 [2024-11-20 07:31:03.461074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.250 qpair failed and we were unable to recover it. 00:30:29.250 [2024-11-20 07:31:03.461369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.461380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.461690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.461702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.462009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.462022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.462215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.462228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.462406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.462418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.462698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.462710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.463002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.463014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.463389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.463401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 
00:30:29.251 [2024-11-20 07:31:03.463686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.463707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.464062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.464073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.464386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.464398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.464571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.464583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.464874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.464886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.465179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.465190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.465489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.465500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.465783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.465794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.466135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.466147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.466510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.466523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 
00:30:29.251 [2024-11-20 07:31:03.466831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.466845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.467131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.467145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.467451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.467465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.467839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.467853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.468179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.468193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.468491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.468504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.468696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.468711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.469036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.469050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.469320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.469333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.469659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.469673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 
00:30:29.251 [2024-11-20 07:31:03.470012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.470026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.470341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.470355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.470660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.470675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.470972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.470986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.471280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.471294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.471571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.471585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.471874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.471888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.472210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.472224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.472563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.251 [2024-11-20 07:31:03.472576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.251 qpair failed and we were unable to recover it. 00:30:29.251 [2024-11-20 07:31:03.472877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.472891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 
00:30:29.252 [2024-11-20 07:31:03.473073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.473088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.473302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.473315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.473654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.473667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.473990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.474004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.474165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.474180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.474467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.474484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.474814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.474827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.475146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.475161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.475441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.475455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.475791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.475805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 
00:30:29.252 [2024-11-20 07:31:03.476102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.476118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.476338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.476352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.476664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.476682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.477006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.477025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.477306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.477323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.477632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.477650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.478021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.478040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.478367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.478386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.478685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.478703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 00:30:29.252 [2024-11-20 07:31:03.479009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.252 [2024-11-20 07:31:03.479028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.252 qpair failed and we were unable to recover it. 
00:30:29.252 [2024-11-20 07:31:03.479339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.252 [2024-11-20 07:31:03.479356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:29.252 qpair failed and we were unable to recover it.
[The three-line pattern above repeats, with only the timestamps advancing from 07:31:03.479 to 07:31:03.554, for every subsequent connection attempt in this window. Each attempt targets the same tqpair=0x7efe54000b90 at addr=10.0.0.2, port=4420, fails in posix_sock_create with errno = 111, and ends with the qpair unrecovered.]
00:30:29.258 [2024-11-20 07:31:03.554581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.554614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.554938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.554969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.555274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.555310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.555651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.555682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.556017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.556048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.556365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.556394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.556732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.556764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.557198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.557230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.557526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.557557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.557902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.557932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 
00:30:29.258 [2024-11-20 07:31:03.558267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.558296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.558644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.258 [2024-11-20 07:31:03.558674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.258 qpair failed and we were unable to recover it. 00:30:29.258 [2024-11-20 07:31:03.559018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.559049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.559401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.559431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.559754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.559784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.560133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.560164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.560525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.560845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.560884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.561227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.561257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.561610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.561640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 
00:30:29.259 [2024-11-20 07:31:03.561996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.562029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.562345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.562375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.562717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.562749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.563088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.563119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.563444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.563474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.563827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.563857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.564246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.564277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.564630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.564660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.565003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.565035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.565412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.565443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 
00:30:29.259 [2024-11-20 07:31:03.565791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.566196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.566227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.566581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.566612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.566921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.566953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.567288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.567318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.567675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.567705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.568028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.568060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.568271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.568302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.568679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.568710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.569042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.569074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 
00:30:29.259 [2024-11-20 07:31:03.569428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.569459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.569803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.569833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.570230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.570268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.570606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.570637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.570996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.571027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.571358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.571389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.571618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.571649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.571895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.571928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.572271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.259 [2024-11-20 07:31:03.572301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.259 qpair failed and we were unable to recover it. 00:30:29.259 [2024-11-20 07:31:03.572628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.572659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 
00:30:29.260 [2024-11-20 07:31:03.572984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.573015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.573377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.573407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.573638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.573671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.573988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.574020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.574381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.574412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.574753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.574783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.575132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.575165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.575404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.575434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.575781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.575810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.576177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.576209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 
00:30:29.260 [2024-11-20 07:31:03.576563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.576593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.576937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.576969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.577345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.577375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.577719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.577750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.578110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.578141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.578464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.578494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.578852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.578894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.579247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.579276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.579587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.579617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.579971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.580003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 
00:30:29.260 [2024-11-20 07:31:03.580250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.580282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.580641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.580673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.580990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.581022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.581373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.581403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.581633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.581666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.582061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.582093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.582430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.582460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.582824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.582854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.583095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.583126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.583458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.583488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 
00:30:29.260 [2024-11-20 07:31:03.583849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.583888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.584196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.584225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.584583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.584619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.584973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.585371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.585403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.585723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.585753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.260 qpair failed and we were unable to recover it. 00:30:29.260 [2024-11-20 07:31:03.586099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.260 [2024-11-20 07:31:03.586130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.586490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.586520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.586879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.586912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.587294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.587324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 
00:30:29.261 [2024-11-20 07:31:03.587574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.587604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.587947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.587978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.588211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.588241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.588599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.588629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.588983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.589014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.589377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.589406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.591052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.591108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.591477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.591509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.591856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.591901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.592255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.592285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 
00:30:29.261 [2024-11-20 07:31:03.592512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.592544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.592919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.592951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.593324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.593354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.593694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.593725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.593969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.594004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.594357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.594388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.594623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.594657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.595032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.595064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.595373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.595403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.595755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.595786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 
00:30:29.261 [2024-11-20 07:31:03.596118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.596149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.596503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.596534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.596883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.596916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.597275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.597306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.597658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.597688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.598033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.598064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.598427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.598457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.598812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.598843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.599135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.599169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.599540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.599572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 
00:30:29.261 [2024-11-20 07:31:03.599915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.599949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.600313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.600344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.600698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.600736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.601067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.261 [2024-11-20 07:31:03.601100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.261 qpair failed and we were unable to recover it. 00:30:29.261 [2024-11-20 07:31:03.601473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.601504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.601875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.601908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.602261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.602291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.602646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.602677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.603044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.603075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.603443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.603472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 
00:30:29.262 [2024-11-20 07:31:03.603811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.603841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.604183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.604214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.604508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.604540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.604893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.604924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.605265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.605295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.605657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.605688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.606071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.606104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.606429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.606459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.606802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.606832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.607082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.607116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 
00:30:29.262 [2024-11-20 07:31:03.607359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.607390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.607755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.607784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.608132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.608164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.608501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.608532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.608857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.608899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.609254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.609284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.609655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.609685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.610025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.610056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.610404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.610434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.610786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.610816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 
00:30:29.262 [2024-11-20 07:31:03.611173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.611206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.611431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.611464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.611795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.611825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.612068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.612102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.612492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.612522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.612908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.612939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.613286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.613317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.613690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.613719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.614137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.614169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.614499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.614529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 
00:30:29.262 [2024-11-20 07:31:03.614875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.614907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.262 qpair failed and we were unable to recover it. 00:30:29.262 [2024-11-20 07:31:03.615246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.262 [2024-11-20 07:31:03.615277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.615606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.615643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.615974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.616006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.616343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.616373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.616714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.616744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.617064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.617096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.617458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.617488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.617898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.617930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.618258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.618294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 
00:30:29.263 [2024-11-20 07:31:03.618633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.618664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.618913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.618944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.619310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.619341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.619698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.619727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.620084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.620116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.620465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.620495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.620851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.620891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.621222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.621252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.622961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.623018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.623390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.623423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 
00:30:29.263 [2024-11-20 07:31:03.625582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.625640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.626030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.626065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.626417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.626449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.626755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.626786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.627134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.627165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.627521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.627552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.628186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.628218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.628588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.628618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 00:30:29.263 [2024-11-20 07:31:03.628970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.263 [2024-11-20 07:31:03.629003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.263 qpair failed and we were unable to recover it. 
00:30:29.264 [2024-11-20 07:31:03.629368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.629398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.629759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.629789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.630118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.630149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.630503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.630533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.630746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.630778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.631112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.631144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.631491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.631522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.631844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.631896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.632257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.632287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.632624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.632655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 
00:30:29.264 [2024-11-20 07:31:03.632900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.632935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.633315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.633345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.633695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.633732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.633954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.633989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.634312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.634343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.634681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.634711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.635037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.635070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.635432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.635462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.635798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.635828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.636179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.636211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 
00:30:29.264 [2024-11-20 07:31:03.636562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.636592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.636943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.636979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.637361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.637392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.637729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.637758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.638118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.638150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.638506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.638535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.638900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.638932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.639229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.639259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.639609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.639638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.639976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.640008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 
00:30:29.264 [2024-11-20 07:31:03.640347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.640378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.640736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.640766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.641105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.641139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.641481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.641511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.641888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.641920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.642265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.642297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.642706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.264 [2024-11-20 07:31:03.642735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.264 qpair failed and we were unable to recover it. 00:30:29.264 [2024-11-20 07:31:03.643091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.643124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.643472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.643503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.643879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.643912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 
00:30:29.265 [2024-11-20 07:31:03.644193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.644223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.644590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.644621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.644855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.644896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.645257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.645287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.645662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.645692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.646014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.646046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.646421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.646454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.646691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.646722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.647045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.647076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.647442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.647472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 
00:30:29.265 [2024-11-20 07:31:03.647831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.647861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.648097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.648130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.648372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.648408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.648772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.648802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.649165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.649471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.649500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.649850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.649917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.650240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.650270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.650629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.650659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.650878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.650912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 
00:30:29.265 [2024-11-20 07:31:03.651240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.651270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.651617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.651647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.651994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.652026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.652371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.652401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.652772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.652802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.653163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.653194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.653517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.653548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.653922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.653955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.654341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.654372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.654722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.654753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 
00:30:29.265 [2024-11-20 07:31:03.654982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.655014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.655356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.655388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.655610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.655642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.656015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.265 [2024-11-20 07:31:03.656046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.265 qpair failed and we were unable to recover it. 00:30:29.265 [2024-11-20 07:31:03.656362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.656393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.656711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.656741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.657090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.657121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.657470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.657500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.657846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.657888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.658256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.658291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 
00:30:29.266 [2024-11-20 07:31:03.658609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.658638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.658981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.659013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.659382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.659412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.659774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.659805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.660128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.660161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.660515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.660544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.660926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.660957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.661302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.661332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.661645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.661676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.661993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.662026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 
00:30:29.266 [2024-11-20 07:31:03.663673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.663729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.664064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.664098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.664333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.664374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.664607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.664637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.664988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.665020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.665406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.665438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.665785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.665818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.666183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.666215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.666576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.666606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.666956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.666987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 
00:30:29.266 [2024-11-20 07:31:03.667340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.667370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.667717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.667746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.668100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.668132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.668491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.668522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.668883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.668915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.669301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.669331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.669684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.669714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.670077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.670109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.670468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.670499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.670852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.670911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 
00:30:29.266 [2024-11-20 07:31:03.671302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.266 [2024-11-20 07:31:03.671332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.266 qpair failed and we were unable to recover it. 00:30:29.266 [2024-11-20 07:31:03.671654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.671684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.672053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.672086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.672403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.672433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.672773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.672803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.673179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.673210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.673575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.673606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.673951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.673984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.674299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.674331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 00:30:29.267 [2024-11-20 07:31:03.674653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.267 [2024-11-20 07:31:03.674684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420 00:30:29.267 qpair failed and we were unable to recover it. 
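Note on the failure repeated above: errno = 111 is ECONNREFUSED on Linux, meaning each TCP connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) was actively refused, which is what the host sees while nothing is accepting on the target port, so nvme_tcp_qpair_connect_sock() fails the qpair and the initiator keeps retrying. A minimal stand-alone sketch that reproduces the same errno, assuming a Linux host and an address that is reachable but refusing connections; this is illustrative only, not SPDK's posix_sock_create() (which also handles getaddrinfo(), non-blocking sockets, and socket options):

    /* Connect to an address with no listener; on Linux this fails with
     * errno 111 (ECONNREFUSED), matching the log lines above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd;

        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */
        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Expected here: "connect() failed, errno = 111 (Connection refused)" */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }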
00:30:29.269 [2024-11-20 07:31:03.706514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.269 [2024-11-20 07:31:03.706545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:29.269 qpair failed and we were unable to recover it.
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Write completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.269 Read completed with error (sct=0, sc=8)
00:30:29.269 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 [2024-11-20 07:31:03.706847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Write completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 Read completed with error (sct=0, sc=8)
00:30:29.270 starting I/O failed
00:30:29.270 [2024-11-20 07:31:03.707665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.270 [2024-11-20 07:31:03.708233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.270 [2024-11-20 07:31:03.708349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420
00:30:29.270 qpair failed and we were unable to recover it.
00:30:29.270 [2024-11-20 07:31:03.708799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.270 [2024-11-20 07:31:03.708837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420
00:30:29.270 qpair failed and we were unable to recover it.
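For orientation, the burst of completion records above is the host failing back every outstanding request as the qpairs are torn down: sct=0 is the NVMe generic command status type, and sc=8 within that set appears to be "Command Aborted due to SQ Deletion", i.e. the I/Os were aborted locally because their queue went away rather than failed by the device; the two 32-entry bursts match one full queue depth for each affected qpair (ids 3 and 1). The "CQ transport error -6" is -ENXIO (No such device or address), the value spdk_nvme_qpair_process_completions() returns once the TCP connection behind a qpair is gone. A hedged sketch of how such completions surface through SPDK's public API; the callback below is hypothetical and not part of this test:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical I/O completion callback: classifies the aborted
     * completions seen above (sct=0, sc=8). */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* In this log: sct == SPDK_NVME_SCT_GENERIC (0) and
             * sc == SPDK_NVME_SC_ABORTED_SQ_DELETION (0x8), i.e. the
             * request was aborted because its submission queue was
             * deleted, not rejected by the controller or the media. */
            fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }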
00:30:29.272 [2024-11-20 07:31:03.741995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.742026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.742375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.742405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.742728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.742757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.743133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.743477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.743507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.743903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.743936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.744279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.744310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.744560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.744591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.744945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.744977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.745194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.745223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 
00:30:29.273 [2024-11-20 07:31:03.745615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.745644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.746006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.746037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.746425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.746461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.746679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.746709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.747090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.747121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.747464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.747495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.747813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.747843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.748254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.748286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.748678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.748709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.749067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.749100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 
00:30:29.273 [2024-11-20 07:31:03.749457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.749489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.749729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.749760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.750056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.750089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.750473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.750504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.750821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.750853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.751212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.751245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.751633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.273 [2024-11-20 07:31:03.751665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.273 qpair failed and we were unable to recover it. 00:30:29.273 [2024-11-20 07:31:03.752025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.752057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.752433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.752463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.752812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.752843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 
00:30:29.274 [2024-11-20 07:31:03.753201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.753233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.753623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.753653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.754022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.754054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.754394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.754424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.754774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.754804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.755167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.755199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.755524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.755556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.755787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.755821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.756071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.756101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.756455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.756485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 
00:30:29.274 [2024-11-20 07:31:03.756831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.756869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.757107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.757137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.757496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.757526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.757889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.757923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.758303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.758332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.758682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.758713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.759055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.759084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.759451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.759483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.759782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.759813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.760200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.760231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 
00:30:29.274 [2024-11-20 07:31:03.760585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.760616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.760936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.760967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.761348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.761384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.274 [2024-11-20 07:31:03.761624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.274 [2024-11-20 07:31:03.761657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.274 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.761990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.762023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.762396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.762428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.762822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.762851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.762960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.762987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.763323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.763352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.763717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.763748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 
00:30:29.275 [2024-11-20 07:31:03.764000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.764031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.764388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.764419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.764754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.764785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.765119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.765149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.765509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.765539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.765751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.765781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe60000b90 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.766013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e92020 is same with the state(6) to be set 00:30:29.275 [2024-11-20 07:31:03.766675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.766723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.767180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.767226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.767583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.767597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 
00:30:29.275 [2024-11-20 07:31:03.767925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.767939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.768003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.768015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.768291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.768305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.768625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.768636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.768989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.769014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.769205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.769216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.769484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.769496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.769814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.769827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.770231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.770244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.770578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.770590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 
00:30:29.275 [2024-11-20 07:31:03.770930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.275 [2024-11-20 07:31:03.770943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.275 qpair failed and we were unable to recover it. 00:30:29.275 [2024-11-20 07:31:03.771262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.771274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.771453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.771464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.771784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.771797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.772120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.772134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.772470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.772482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.772825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.772837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.773173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.773185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.773524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.773536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.773855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.773882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 
00:30:29.276 [2024-11-20 07:31:03.774176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.774188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.774526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.774538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.774664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.774674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.774980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.774996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.775316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.775328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.775665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.775678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.776014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.776026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.776336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.776348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.776682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.776694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.777020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.777032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 
00:30:29.276 [2024-11-20 07:31:03.777350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.777363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.777705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.777717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.777855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.777870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.778141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.778154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.778468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.778480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.778704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.778715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.779036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.779049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.779347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.779361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.779643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.779655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 00:30:29.276 [2024-11-20 07:31:03.779876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.276 [2024-11-20 07:31:03.779889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.276 qpair failed and we were unable to recover it. 
00:30:29.277 [2024-11-20 07:31:03.780192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.780203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.780501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.780514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.780739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.780751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.781082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.781096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.781436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.781449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.781781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.781794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.782132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.782144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.782488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.782500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.782835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.782847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.783166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.783179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 
00:30:29.277 [2024-11-20 07:31:03.783522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.783536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.783874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.783887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.784195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.784207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.784541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.784553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.784758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.784769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.785024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.785035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.785368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.785380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.785715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.785728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.786033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.786046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.786336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 
00:30:29.277 [2024-11-20 07:31:03.786674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.786685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.787014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.787026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.787363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.787377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.787687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.787700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.788045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.788058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.788391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.788403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.788699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.788711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.788902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.788916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.789224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.789235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.789573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.789585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 
00:30:29.277 [2024-11-20 07:31:03.789922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.789936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.790273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.790285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.790595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.790608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.790897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.790909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.791209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.791220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.791537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.791549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.791871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.277 [2024-11-20 07:31:03.791883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.277 qpair failed and we were unable to recover it. 00:30:29.277 [2024-11-20 07:31:03.792204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.792217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.792516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.792527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.792890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.792903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 
00:30:29.278 [2024-11-20 07:31:03.793212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.793223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.793540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.793552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.793858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.793876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.794160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.794172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.794501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.794514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.794817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.794829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.795013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.795025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.795302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.795317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.795611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.795623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.795950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.795963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 
00:30:29.278 [2024-11-20 07:31:03.796170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.796182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.796463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.796477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.796813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.796826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.797165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.797177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.797565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.797578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.797879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.797892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.798234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.798246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.798577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.798589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.798887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.798900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.799235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.799247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 
00:30:29.278 [2024-11-20 07:31:03.799456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.799469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.799793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.799805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.800110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.800123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.800453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.800465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.800749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.800763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.801032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.801044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.801352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.801365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.801664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.801675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.278 [2024-11-20 07:31:03.802009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.278 [2024-11-20 07:31:03.802022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.278 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.802356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.802368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 
00:30:29.279 [2024-11-20 07:31:03.802676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.802690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.803019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.803031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.803338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.803350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.803683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.803695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.804005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.804018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.804339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.804350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.804481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.804492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.804814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.804826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.805142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.805158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.805495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.805508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 
00:30:29.279 [2024-11-20 07:31:03.805839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.805852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.806035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.806048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.806352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.806363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.806676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.806688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.807020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.807335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.807348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.807672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.807683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.807994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.808006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.808347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.808359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.808725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.808737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 
00:30:29.279 [2024-11-20 07:31:03.809047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.809059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.809264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.809277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.809562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.809576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.809911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.809923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.810216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.810228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.810441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.810452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.810795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.810808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.811117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.811129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.811437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.811450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.811762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.811775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 
00:30:29.279 [2024-11-20 07:31:03.812067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.812079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.812382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.812395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.812704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.812716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.813123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.813138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.813445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.813459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.813788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.813804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.279 qpair failed and we were unable to recover it. 00:30:29.279 [2024-11-20 07:31:03.813978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.279 [2024-11-20 07:31:03.813990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.814269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.814280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.814566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.814579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.814914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.814927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 
00:30:29.280 [2024-11-20 07:31:03.815252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.815264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.815594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.815605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.815953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.815966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.816291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.816302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.816618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.816629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.816938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.816950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.817241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.817253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.817590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.817603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.817824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.817835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.818018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.818031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 
00:30:29.280 [2024-11-20 07:31:03.818331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.818343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.818646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.818658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.818989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.819002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.819341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.819353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.819653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.819665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.819991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.820003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.820334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.820347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.820678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.820689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.820885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.820899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.821280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.821291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 
00:30:29.280 [2024-11-20 07:31:03.821579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.821590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.821897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.821909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.822223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.822236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.822574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.822587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.822918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.822930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.823293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.823304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.823645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.823657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.823962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.823973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.824312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.824324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.824630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.824642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 
00:30:29.280 [2024-11-20 07:31:03.824975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.824988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.825205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.825216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.825527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.825540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.280 [2024-11-20 07:31:03.825871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.280 [2024-11-20 07:31:03.825884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.280 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.826243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.826254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.826521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.826532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.826874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.826888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.827222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.827233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.827534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.827547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.827857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.827873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 
00:30:29.281 [2024-11-20 07:31:03.828199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.828211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.828554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.828566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.828764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.828776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.829102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.829115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.829435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.829448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.829782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.829793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.830118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.830131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.830434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.830445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.830795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.830807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.831121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.831132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 
00:30:29.281 [2024-11-20 07:31:03.831478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.831490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.831802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.831814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.832125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.832138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.832438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.832451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.832786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.832797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.833109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.833123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.833420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.833432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.833741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.833754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.833944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.833957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.834302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.834314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 
00:30:29.281 [2024-11-20 07:31:03.834641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.834652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.834983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.834996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.835310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.835322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.835633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.835649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.836052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.281 [2024-11-20 07:31:03.836064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.281 qpair failed and we were unable to recover it. 00:30:29.281 [2024-11-20 07:31:03.836368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.836380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.836719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.836730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.836912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.836925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.837226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.837239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.837559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.837572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 
00:30:29.282 [2024-11-20 07:31:03.837980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.837992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.838289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.838302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.838631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.838643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.838946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.838958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.839254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.839265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.839616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.839628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.839930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.839941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.840253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.840265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.840596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.840607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.840920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.840932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 
00:30:29.282 [2024-11-20 07:31:03.841237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.841249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.841582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.841594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.841803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.841814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.842123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.842135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.842503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.842514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.842818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.842829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.843134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.843146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.843479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.843491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.843778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.843790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 00:30:29.282 [2024-11-20 07:31:03.844126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.282 [2024-11-20 07:31:03.844138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.282 qpair failed and we were unable to recover it. 
00:30:29.282 [2024-11-20 07:31:03.844451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.282 [2024-11-20 07:31:03.844466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.282 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats unchanged for every reconnect attempt from [2024-11-20 07:31:03.844451] through [2024-11-20 07:31:03.911267]; only the timestamps differ ...]
00:30:29.289 [2024-11-20 07:31:03.911596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.911607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.911927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.911941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.912287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.912299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.912632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.912644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.912961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.912973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.913258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.913269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.913602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.913613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.913925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.913940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.914267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.914279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.914601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.914613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 
00:30:29.289 [2024-11-20 07:31:03.914875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.914887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.915182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.915195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.915494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.915505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.915819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.915832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.916176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.916188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.916558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.916569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.916880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.916892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.917174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.917185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.917498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.917510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.917815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.917826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 
00:30:29.289 [2024-11-20 07:31:03.918131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.918144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.918455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.918468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.918738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.918751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.919036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.919048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.919341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.919353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.919632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.919644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.919970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.919983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.920292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.920303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.920474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.920486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.920815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.920827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 
00:30:29.289 [2024-11-20 07:31:03.921138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.921152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.921480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.921492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.921826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.921838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.922148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.922160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.922466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.922478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.922808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.922819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.923185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.923197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.289 qpair failed and we were unable to recover it. 00:30:29.289 [2024-11-20 07:31:03.923511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.289 [2024-11-20 07:31:03.923524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.923856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.923872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.924188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.924200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 
00:30:29.290 [2024-11-20 07:31:03.924253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.924265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.924571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.924584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.925063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.925075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.925377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.925390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.925735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.925919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.925931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.926262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.926274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.926583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.926595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.926934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.926946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.927259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.927271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 
00:30:29.290 [2024-11-20 07:31:03.927586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.927597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.927772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.927783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.928049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.928061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.928389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.928400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.928730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.928742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.928924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.928938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.929271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.929283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.929614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.929626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.929927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.929940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.930260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.930272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 
00:30:29.290 [2024-11-20 07:31:03.930602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.930614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.930919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.930932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.931269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.931281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.931600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.931611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.931871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.931883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.932063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.932075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.932380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.932392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.932724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.932736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.933071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.933083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.933399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.933412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 
00:30:29.290 [2024-11-20 07:31:03.933728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.933739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.934045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.934057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.934358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.934369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.934699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.934710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.935012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.935023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.290 qpair failed and we were unable to recover it. 00:30:29.290 [2024-11-20 07:31:03.935365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.290 [2024-11-20 07:31:03.935379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.935684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.935696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.936004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.936016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.936352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.936364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.936550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.936562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 
00:30:29.291 [2024-11-20 07:31:03.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.936910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.937222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.937233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.937560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.937571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.937880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.937893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.938222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.938233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.938559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.938571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.938878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.938889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.939200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.939507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.939519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.939850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.939867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 
00:30:29.291 [2024-11-20 07:31:03.940192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.940203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.940498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.940509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.940841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.940853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.941174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.941186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.941386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.941397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.941707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.941718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.941923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.941935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.942140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.942151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.942474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.942485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.942694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.942705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 
00:30:29.291 [2024-11-20 07:31:03.942983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.942995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.943323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.943335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.943667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.943681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.944012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.944023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.944369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.944380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.944689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.944700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.945032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.945044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.945350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.945362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.945700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.945711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.946042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.946054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 
00:30:29.291 [2024-11-20 07:31:03.946375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.946386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.946723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.946735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.947052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.947064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.947379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.947391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.947724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.948036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.948047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.291 [2024-11-20 07:31:03.948402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.291 [2024-11-20 07:31:03.948413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.291 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.948703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.948715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.949036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.949048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.949351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.949362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 
00:30:29.292 [2024-11-20 07:31:03.949668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.949679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.949995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.950007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.950316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.950327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.950620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.950632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.950963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.950975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.951300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.951312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.951625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.951636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.951975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.951987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.952297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.952308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 00:30:29.292 [2024-11-20 07:31:03.952664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.292 [2024-11-20 07:31:03.952677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.292 qpair failed and we were unable to recover it. 
00:30:29.292 [2024-11-20 07:31:03.953043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.292 [2024-11-20 07:31:03.953054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.292 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:31:03.953 through 07:31:04.016 ...]
00:30:29.574 [2024-11-20 07:31:04.016372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.574 [2024-11-20 07:31:04.016384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.574 qpair failed and we were unable to recover it.
00:30:29.574 [2024-11-20 07:31:04.016603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.016615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.016817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.016828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.017141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.017153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.017338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.017350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.017649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.017662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.017983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.017995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.018208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.018221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.018514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.018526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.018842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.018854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.019166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.019179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 
00:30:29.574 [2024-11-20 07:31:04.019563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.019575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.019877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.019889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.020090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.020101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.020339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.020350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.020656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.020667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.020946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.020957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.021248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.021260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.021547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.021559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.021767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.574 [2024-11-20 07:31:04.021778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.574 qpair failed and we were unable to recover it. 00:30:29.574 [2024-11-20 07:31:04.022082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.022093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 
00:30:29.575 [2024-11-20 07:31:04.022461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.022474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.022778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.022790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.023081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.023092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.023405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.023417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.023688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.023699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.024003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.024015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.024295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.024306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.024616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.024627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.024828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.024840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.025015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.025027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 
00:30:29.575 [2024-11-20 07:31:04.025313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.025324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.025632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.025643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.025972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.025983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.026292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.026303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.026646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.026658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.026989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.027000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.027308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.027319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.027663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.027675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.028031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.028043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.028346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.028357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 
00:30:29.575 [2024-11-20 07:31:04.028692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.028703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.029002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.029014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.029333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.029344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.029677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.029689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.030027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.030039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.030352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.030364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.030668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.030680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.030986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.030998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.031353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.031364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.031699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.031711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 
00:30:29.575 [2024-11-20 07:31:04.032026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.032038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.032361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.032373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.032567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.032580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.032886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.032898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.033209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.033220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.033447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.033457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.033767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.575 [2024-11-20 07:31:04.033779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.575 qpair failed and we were unable to recover it. 00:30:29.575 [2024-11-20 07:31:04.034051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.034062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.034367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.034379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.034691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.034703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 
00:30:29.576 [2024-11-20 07:31:04.035031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.035043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.035341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.035352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.035649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.035660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.035984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.035996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.036329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.036340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.036665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.036677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.037007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.037019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.037325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.037346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.037642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.037653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.037967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.037979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 
00:30:29.576 [2024-11-20 07:31:04.038200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.038211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.038411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.038422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.038752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.038763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.039001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.039012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.039318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.039331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.039618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.039629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.039975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.039986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.040189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.040199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.040513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.040525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.040857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.040874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 
00:30:29.576 [2024-11-20 07:31:04.041205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.041216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.041527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.041539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.041843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.041854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.042147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.042158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.042467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.042478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.042676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.042689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.043031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.043043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.576 [2024-11-20 07:31:04.043424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.576 [2024-11-20 07:31:04.043435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.576 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.043729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.043740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.044084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.044096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 
00:30:29.577 [2024-11-20 07:31:04.044384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.044395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.044698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.044710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.045009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.045020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.045353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.045365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.045647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.045659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.045971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.045983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.046168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.046178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.046479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.046490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.046802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.046813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.047145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.047156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 
00:30:29.577 [2024-11-20 07:31:04.047501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.047513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.047822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.047836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.048157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.048168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.048501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.048513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.048820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.048832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.049161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.049173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.049510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.049522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.049824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.049836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.050181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.050193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.050420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.050432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 
00:30:29.577 [2024-11-20 07:31:04.050754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.050766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.051149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.051161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.051438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.051450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.051778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.051790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.052093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.052107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.052439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.052452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.052779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.052791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.053040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.053052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.053325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.053337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.053652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.053664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 
00:30:29.577 [2024-11-20 07:31:04.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.053899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.054245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.054256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.054592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.054604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.054892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.054903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.055239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.577 [2024-11-20 07:31:04.055251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.577 qpair failed and we were unable to recover it. 00:30:29.577 [2024-11-20 07:31:04.055411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.578 [2024-11-20 07:31:04.055423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.578 qpair failed and we were unable to recover it. 00:30:29.578 [2024-11-20 07:31:04.055738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.578 [2024-11-20 07:31:04.055750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.578 qpair failed and we were unable to recover it. 00:30:29.578 [2024-11-20 07:31:04.056070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.578 [2024-11-20 07:31:04.056082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.578 qpair failed and we were unable to recover it. 00:30:29.578 [2024-11-20 07:31:04.056384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.578 [2024-11-20 07:31:04.056395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.578 qpair failed and we were unable to recover it. 00:30:29.578 [2024-11-20 07:31:04.056694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.578 [2024-11-20 07:31:04.056706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.578 qpair failed and we were unable to recover it. 
00:30:29.578 [2024-11-20 07:31:04.057026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.057037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.057374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.057385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.057729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.057740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.058069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.058081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.058388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.058400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.058697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.058708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.059035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.059047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.059344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.059355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.059661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.059673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.059955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.059966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.060273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.060284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.060578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.060589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.060873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.060885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.061192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.061203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.061513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.061525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.061817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.061828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.062127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.062139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.062470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.062481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.062759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.062769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.063102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.063113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.063416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.063429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.063764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.063775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.064065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.064076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.064403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.064415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.064705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.064717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.065050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.065062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.065370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.065382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.065667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.065678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.066020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.066033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.066362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.066373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.066684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.067010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.067022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.067351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.578 [2024-11-20 07:31:04.067363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.578 qpair failed and we were unable to recover it.
00:30:29.578 [2024-11-20 07:31:04.067697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.067709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.068025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.068036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.068374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.068385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.068713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.068724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.069037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.069048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.069360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.069372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.069673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.069685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.069902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.069914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.070216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.070227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.070560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.070571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.070875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.070886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.071194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.071205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.071547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.071558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.071867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.071879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.072207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.072219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.072550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.072560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.072870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.072881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.073189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.073200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.073534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.073546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.073873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.073885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.074185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.074197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.074495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.074507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.074712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.074724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.075045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.075057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.075378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.075390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.075673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.075684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.076002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.076014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.076316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.076327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.076634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.076646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.076946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.076957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.077265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.077276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.077469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.077480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.077803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.077815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.078119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.078133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.078458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.078469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.079112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.579 [2024-11-20 07:31:04.079124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.579 qpair failed and we were unable to recover it.
00:30:29.579 [2024-11-20 07:31:04.079432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.079444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.079772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.079783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.080111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.080124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.080446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.080457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.080765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.080777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.081082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.081094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.081277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.081289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.081608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.081619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.081929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.081941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.082224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.082236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.082430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.082442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.082731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.082742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.083042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.083053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.083219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.083231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.083510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.083521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.083831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.083842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.084155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.084166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.084500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.084512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.084819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.084831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.085203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.085215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.085498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.085509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.085813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.085824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.086126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.086138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.086470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.086483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.086788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.086800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.087106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.087118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.087446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.087458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.087748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.087759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.088086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.088098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.088467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.088479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.088694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.088706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.089032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.089044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.089373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.089718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.089729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.090038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.090051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.090337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.090348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.090569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.090579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.090890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.580 [2024-11-20 07:31:04.090903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.580 qpair failed and we were unable to recover it.
00:30:29.580 [2024-11-20 07:31:04.091264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.091275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.091582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.091593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.091767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.091778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.092074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.092085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.092268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.092280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.092597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.092609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.092814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.092825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.093146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.093158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.093489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.093500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.093833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.093844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.094161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.094172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.094499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.094510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.094803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.094814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.095135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.095146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.095526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.095538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.095870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.095882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.096066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.096078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.096474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.096485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.096813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.096824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.097133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.097145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.097452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.097465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.097793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.097804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.097989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.098000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.098279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.098289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.098548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.098558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.098850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.098870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.099079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.099089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.099390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.099401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.099719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.099732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.099919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.099933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.100274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.100285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.100592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.100605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.100823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.581 [2024-11-20 07:31:04.100834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.581 qpair failed and we were unable to recover it.
00:30:29.581 [2024-11-20 07:31:04.101145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.101156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.101490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.101502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.101832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.101844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.102176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.102188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.102491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.102503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.102831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.102842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.103255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.103268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.103597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.103608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.103978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.103990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.104324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.104335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.104646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.104657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.104853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.104869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.105182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.105193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.105502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.105513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.105876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.105887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.106181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.106194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.106524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.106534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.106837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.106849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.107149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.107161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.107527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.107537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.107838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.107852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.108208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.108220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.108498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.108509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.108818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.108829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.109174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.109186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.109493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.109504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.109871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.109882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.110172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.110185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.110494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.110507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.110808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.110821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.111120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.111131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.111458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.111470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.111773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.111784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.112092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.112104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.112424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.112436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.112762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.112775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.113145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.113158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.582 [2024-11-20 07:31:04.113457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.582 [2024-11-20 07:31:04.113468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.582 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.113799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.113811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.114142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.114154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.114455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.114468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.114770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.114782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.115121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.115133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.115517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.115530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.115829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.115842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.116053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.116066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.116376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.116388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.116686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.116700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.117034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.117047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.117268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.117280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.117505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.117516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.117780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.117791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.118178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.118189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.118488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.118500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.118770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.118781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.119091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.119102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.119433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.119444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.119779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.119791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.120107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.120118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.120424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.120437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.120719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.120731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.121063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.121075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.121389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.121400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.121727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.121740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.122038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.122049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.122363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.122374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.122706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.122717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.123027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.123038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.123357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.583 [2024-11-20 07:31:04.123370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.583 qpair failed and we were unable to recover it.
00:30:29.583 [2024-11-20 07:31:04.123575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.123586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.123901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.123921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.124232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.124243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.124571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.124581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.124788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.124799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.125126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.125138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.583 [2024-11-20 07:31:04.125433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.583 [2024-11-20 07:31:04.125445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.583 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.125744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.125756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.126079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.126091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.126451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.126462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 
00:30:29.584 [2024-11-20 07:31:04.126760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.126772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.127072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.127083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.127422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.127434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.127763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.127774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.128078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.128089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.128392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.128403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.128702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.128713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.129017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.129028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.129310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.129330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.129651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.129664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 
00:30:29.584 [2024-11-20 07:31:04.129986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.129998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.130283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.130295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.130602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.130613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.130914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.130925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.131253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.131264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.131563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.131880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.131891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.132205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.132217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.132543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.132555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.132872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.132885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 
00:30:29.584 [2024-11-20 07:31:04.133210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.133221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.133371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.133381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.133675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.133686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.133980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.133992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.134333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.134344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.134644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.134656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.135024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.135036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.135333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.135345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.135643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.135653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.135933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.135945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 
00:30:29.584 [2024-11-20 07:31:04.136139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.136151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.136425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.136436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.136772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.136785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.136988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.136999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.584 [2024-11-20 07:31:04.137272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.584 [2024-11-20 07:31:04.137283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.584 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.137618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.137631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.138011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.138024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.138205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.138217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.138522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.138533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.138840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.138852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 
00:30:29.585 [2024-11-20 07:31:04.139192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.139204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.139532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.139543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.139852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.139866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.140109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.140120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.140412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.140423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.140728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.140739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.141034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.141046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.141350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.141362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.141685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.141696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.141999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.142011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 
00:30:29.585 [2024-11-20 07:31:04.142345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.142356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.142656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.142668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.143024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.143036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.143367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.143379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.143584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.143595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.143838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.143849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.144031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.144044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.144335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.144346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.144675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.144983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.144994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 
00:30:29.585 [2024-11-20 07:31:04.145287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.145300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.145618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.145629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.145925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.145938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.146259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.146272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.146580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.146592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.146937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.146948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.147267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.147278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.147591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.147602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.585 [2024-11-20 07:31:04.147940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.585 [2024-11-20 07:31:04.147953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.585 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.148296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.148307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 
00:30:29.586 [2024-11-20 07:31:04.148632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.148643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.148872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.148883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.149162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.149173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.149474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.149484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.149813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.149825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.150125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.150138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.150439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.150782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.150793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.151111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.151122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.151426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.151437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 
00:30:29.586 [2024-11-20 07:31:04.151727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.151740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.152086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.152097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.152384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.152396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.152704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.152715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.153022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.153033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.153344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.153355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.153683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.153695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.154005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.154017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.154348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.154359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.154697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.154709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 
00:30:29.586 [2024-11-20 07:31:04.155052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.155065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.155365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.155377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.155711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.155721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.156025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.156037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.156341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.156352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.156683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.156694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.157003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.157014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.157340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.157351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.157682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.157693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.158005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.158017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 
00:30:29.586 [2024-11-20 07:31:04.158330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.158342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.158682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.158693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.159013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.159024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.159353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.159364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.159697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.159708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.160014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.160025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.586 [2024-11-20 07:31:04.160350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.586 [2024-11-20 07:31:04.160361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.586 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.160691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.160702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.161098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.161110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.161413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.161424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 
00:30:29.587 [2024-11-20 07:31:04.161761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.161772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.162082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.162094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.162403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.162415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.162741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.162753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.163044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.163055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.163373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.163385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.163680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.163691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.163743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.163754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.164034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.164047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.164372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.164384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 
00:30:29.587 [2024-11-20 07:31:04.164718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.164730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.165063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.165075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.165457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.165468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.165765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.165784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.166082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.166094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.166406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.166418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.166694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.166705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.167016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.167359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.167371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 00:30:29.587 [2024-11-20 07:31:04.167704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.587 [2024-11-20 07:31:04.167715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.587 qpair failed and we were unable to recover it. 
00:30:29.593 [2024-11-20 07:31:04.229648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.229660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.229857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.229874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.230159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.230171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.230463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.230475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.230782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.230793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.230978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.230990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.231304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.231315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.231636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.231648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.231978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.231990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.232295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.232305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 
00:30:29.593 [2024-11-20 07:31:04.232611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.232622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.232964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.232976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.233276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.233286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.233617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.233629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.233966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.233978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.234288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.234301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.234504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.234515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.234786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.235143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.235157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.235489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.235501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 
00:30:29.593 [2024-11-20 07:31:04.235839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.235850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.236095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.236108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.236294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.236306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.236510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.236522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.236818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.236833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.237167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.237178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.237490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.237501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.237859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.237877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.238211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.238224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.238496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.238507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 
00:30:29.593 [2024-11-20 07:31:04.238808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.238820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.239018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.239030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.239351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.239362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.239660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.239672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.239952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.239963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.240288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.240300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.593 [2024-11-20 07:31:04.240590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.593 [2024-11-20 07:31:04.240602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.593 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.240936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.240949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.241276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.241287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.241481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.241492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 
00:30:29.594 [2024-11-20 07:31:04.241792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.241803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.242093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.242106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.242451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.242463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.242775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.242788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.243098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.243110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.243409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.243423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.243633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.243644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.243976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.243988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.244309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.244625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.244637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 
00:30:29.594 [2024-11-20 07:31:04.244945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.244957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.245264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.245275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.245546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.245557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.245858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.245874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.246080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.246091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.246405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.246416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.246727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.246739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.246921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.246932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.247142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.247153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.247324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.247335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 
00:30:29.594 [2024-11-20 07:31:04.247626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.247637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.247939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.247951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.248254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.248265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.248572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.248584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.248887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.248898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.249095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.249106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.249402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.594 [2024-11-20 07:31:04.249413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.594 qpair failed and we were unable to recover it. 00:30:29.594 [2024-11-20 07:31:04.249774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.249785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.250052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.250062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.250358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.250369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 
00:30:29.595 [2024-11-20 07:31:04.250708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.250720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.251031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.251043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.251381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.251396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.251704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.251715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.252028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.252040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.252381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.252392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.252698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.252710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.252947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.252958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.253290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.253301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.253604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.253615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 
00:30:29.595 [2024-11-20 07:31:04.253835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.253846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.254147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.254159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.254346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.254356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.254532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.254543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.254877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.254889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.255364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.255383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.255699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.255712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.256041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.256054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.256360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.256371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.256605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.256616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 
00:30:29.595 [2024-11-20 07:31:04.256932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.256945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.257258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.257270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.257595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.257607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.257831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.257843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.258168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.258180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.258520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.258849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.258860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.259195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.259208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.259535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.259549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 00:30:29.595 [2024-11-20 07:31:04.259833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.595 [2024-11-20 07:31:04.259845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.595 qpair failed and we were unable to recover it. 
00:30:29.595 [2024-11-20 07:31:04.260164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.260177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.260539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.260551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.260731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.260744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.260953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.260965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.261271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.261283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.261566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.261578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.261882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.261895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.262171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.262182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.262507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.262518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.262811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.262822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 
00:30:29.596 [2024-11-20 07:31:04.263117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.263128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.263937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.263961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.264275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.264287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.264597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.264608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.265074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.265089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.265407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.265419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.265720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.265731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.266043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.266056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.266366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.266378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.266704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.266715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 
00:30:29.596 [2024-11-20 07:31:04.267039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.267051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.267382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.267393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.267718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.267730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.268045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.268056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.268234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.268245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.268539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.268552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.268881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.268895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.269198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.269209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.269508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.269520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 00:30:29.596 [2024-11-20 07:31:04.269838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-20 07:31:04.269849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.596 qpair failed and we were unable to recover it. 
00:30:29.596 [2024-11-20 07:31:04.270178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.596 [2024-11-20 07:31:04.270190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.596 qpair failed and we were unable to recover it.
(the same three-line sequence, "connect() failed, errno = 111" / "sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.", repeated continuously from 07:31:04.270351 through 07:31:04.336433; duplicate entries elided)
00:30:29.877 [2024-11-20 07:31:04.336445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.877 [2024-11-20 07:31:04.336457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.877 qpair failed and we were unable to recover it.
00:30:29.877 [2024-11-20 07:31:04.336758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.336770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.337096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.337108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.337338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.337350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.337647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.337659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.337872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.337885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.338184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.338195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.338497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.338509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.338808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.338819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.339155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.339167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.339481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.339492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 
00:30:29.877 [2024-11-20 07:31:04.339795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.339805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.340117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.340130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.340383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.340394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.340694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.340706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.341035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.341047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.341373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.341384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.341691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.877 [2024-11-20 07:31:04.341702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.877 qpair failed and we were unable to recover it. 00:30:29.877 [2024-11-20 07:31:04.341992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.342006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.342348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.342665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.342676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 
00:30:29.878 [2024-11-20 07:31:04.343012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.343024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.343244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.343254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.343579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.343590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.343901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.343913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.344234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.344246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.344543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.344554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.344846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.344858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.345155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.345167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.345511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.345523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.345854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.345869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 
00:30:29.878 [2024-11-20 07:31:04.346200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.346212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.346542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.346553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.346852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.346872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.347196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.347207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.347512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.347524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.347870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.348193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.348205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.348509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.348520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.348850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.348865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.349166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.349178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 
00:30:29.878 [2024-11-20 07:31:04.349477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.349488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.349820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.349832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.350204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.350216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.350521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.350533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.350859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.350877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.351176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.351187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.351501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.351512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.351844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.351855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.352172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.352183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.878 [2024-11-20 07:31:04.352500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.352511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 
00:30:29.878 [2024-11-20 07:31:04.352842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.878 [2024-11-20 07:31:04.352854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.878 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.353205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.353217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.353522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.353535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.353734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.353744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.354062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.354074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.354381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.354391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.354718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.354729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.355059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.355072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.355385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.355396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.355697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.355709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 
00:30:29.879 [2024-11-20 07:31:04.356012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.356023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.356316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.356327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.356659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.356670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.356977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.356996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.357323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.357334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.357663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.357675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.358004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.358016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.358327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.358338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.358623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.358634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.358832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.358842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 
00:30:29.879 [2024-11-20 07:31:04.359110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.359121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.359420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.359433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.359751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.359762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.360050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.360061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.360363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.360375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.360683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.360694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.361022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.361034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.361226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.361243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.361530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.361542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.361880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.361891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 
00:30:29.879 [2024-11-20 07:31:04.362199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.362210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.362509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.362521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.362814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.362826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.363127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.363139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.363471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.363482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.363808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.879 [2024-11-20 07:31:04.363821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.879 qpair failed and we were unable to recover it. 00:30:29.879 [2024-11-20 07:31:04.364147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.364158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.364496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.364507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.364813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.364825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.365141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.365152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 
00:30:29.880 [2024-11-20 07:31:04.365418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.365429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.365702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.365713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.366015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.366026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.366334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.366345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.366639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.366651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.366926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.366938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.367254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.367265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.367565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.367578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.367773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.367785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.368112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.368123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 
00:30:29.880 [2024-11-20 07:31:04.368425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.368436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.368718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.368730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.369060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.369072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.369373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.369384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.369715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.369726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.370031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.370044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.370364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.370375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.370737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.370748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.371081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.371094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.371405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.371417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 
00:30:29.880 [2024-11-20 07:31:04.371748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.371759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.372091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.372103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.372460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.372473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.372800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.372811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.373115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.373126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.373474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.373484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.373780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.373792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.374105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.374117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.374266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.374278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.374599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.374612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 
00:30:29.880 [2024-11-20 07:31:04.374926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.374937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.375131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.880 [2024-11-20 07:31:04.375142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.880 qpair failed and we were unable to recover it. 00:30:29.880 [2024-11-20 07:31:04.375326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.375337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.375644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.375655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.375980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.376306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.376317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.376685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.376696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.376992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.377003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.377335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.377346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 00:30:29.881 [2024-11-20 07:31:04.377657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.881 [2024-11-20 07:31:04.377668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.881 qpair failed and we were unable to recover it. 
00:30:29.881 [2024-11-20 07:31:04.378007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.881 [2024-11-20 07:31:04.378019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.881 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-20 07:31:04.378358 through 07:31:04.444098 ...]
00:30:29.886 [2024-11-20 07:31:04.444397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.444409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.444714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.444726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.445040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.445052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.445380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.445393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.445693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.445705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.446036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.446049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.446363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.446375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.446683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.446696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.447004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.447017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.447231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.447243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 
00:30:29.886 [2024-11-20 07:31:04.447420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.447433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.447719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.447732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.447934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.447948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.448282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.448294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.448603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.448616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.448929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.448942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.449262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.449277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.449605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.449618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.449919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.449932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.450294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.450306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 
00:30:29.886 [2024-11-20 07:31:04.450603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.450615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.886 qpair failed and we were unable to recover it. 00:30:29.886 [2024-11-20 07:31:04.450807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.886 [2024-11-20 07:31:04.450819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.451210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.451223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.451430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.451442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.451761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.451773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.451901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.451915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.452229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.452242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.452451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.452463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.452694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.452707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.453011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.453024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 
00:30:29.887 [2024-11-20 07:31:04.453350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.453363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.453694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.453706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.453899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.453913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.454316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.454328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.454648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.454660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.454890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.454903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.455190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.455202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.455532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.455544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.455893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.455906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.456284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.456296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 
00:30:29.887 [2024-11-20 07:31:04.456629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.456642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.456950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.456962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.457280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.457292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.457629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.457644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.457966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.457979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.458290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.458303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.458626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.458638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.458966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.458979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.459294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.459306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.459613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.459625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 
00:30:29.887 [2024-11-20 07:31:04.459939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.459952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.460255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.460267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.460557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.460570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.460877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.460890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.461214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.461226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.461560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.461572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.461664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.461674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.462034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.462047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.462322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.887 [2024-11-20 07:31:04.462334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.887 qpair failed and we were unable to recover it. 00:30:29.887 [2024-11-20 07:31:04.462632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.462644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 
00:30:29.888 [2024-11-20 07:31:04.462960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.462973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.463276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.463289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.463634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.463646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.463836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.463848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.464130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.464142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.464366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.464377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.464689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.464700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.464986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.464997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.465306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.465318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.465521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.465532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 
00:30:29.888 [2024-11-20 07:31:04.465855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.465872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.466045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.466057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.466377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.466388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.466719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.466731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.467050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.467062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.467272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.467283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.467353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.467364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.467671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.467682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.468008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.468020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.468346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.468358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 
00:30:29.888 [2024-11-20 07:31:04.468634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.468645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.468956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.468969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.469336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.469347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.469521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.469531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.469739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.469750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.470083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.470095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.470394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.470406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.470717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.470728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.471030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.471042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.471360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.471371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 
00:30:29.888 [2024-11-20 07:31:04.471682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.471695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.471939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.471951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.472267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.472279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.472587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.472599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.472933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.472946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.473296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.473307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.473612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.473624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.888 qpair failed and we were unable to recover it. 00:30:29.888 [2024-11-20 07:31:04.473970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.888 [2024-11-20 07:31:04.473981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.474158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.474169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.474360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.474372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 
00:30:29.889 [2024-11-20 07:31:04.474735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.474746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.475038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.475049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.475354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.475365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.475677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.475689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.476025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.476036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.476219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.476230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.476428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.476439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.476653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.476664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.476981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.476993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.477178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.477189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 
00:30:29.889 [2024-11-20 07:31:04.477477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.477489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.477793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.477806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.478096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.478107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.478400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.478412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.478739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.478750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.479068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.479081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.479366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.479377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.479676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.479688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.479948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.479960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 00:30:29.889 [2024-11-20 07:31:04.480261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.889 [2024-11-20 07:31:04.480272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.889 qpair failed and we were unable to recover it. 
00:30:29.889 [2024-11-20 07:31:04.482939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.889 [2024-11-20 07:31:04.483029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:29.889 qpair failed and we were unable to recover it.
00:30:29.889 [2024-11-20 07:31:04.483487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.889 [2024-11-20 07:31:04.483525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:29.889 qpair failed and we were unable to recover it.
00:30:29.889 [2024-11-20 07:31:04.483764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.890 [2024-11-20 07:31:04.483801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:29.890 qpair failed and we were unable to recover it.
[... from [2024-11-20 07:31:04.484151] onward the attempts target tqpair=0x1e95490 again; the same errno = 111 failure triple repeats for every attempt through [2024-11-20 07:31:04.495214] ...]
00:30:29.891 [2024-11-20 07:31:04.495548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.891 [2024-11-20 07:31:04.495560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-11-20 07:31:04.495861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.495877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.496167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.496178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.496489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.496500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.496676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.496687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.496949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.496962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.497287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.497299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.497614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.497626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.497807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.497819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.498156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.498168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.498338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.498348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 
00:30:29.891 [2024-11-20 07:31:04.498609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.498621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.498827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.498839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.499268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.499279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.499594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.499606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.499927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.499939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.500255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.500266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.500561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.500574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.500880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.500893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.501059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.501071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.501413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.501425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 
00:30:29.891 [2024-11-20 07:31:04.501617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.501628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.501943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.501954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.502286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.502297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.502625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.502637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.502939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.502958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.503315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.503326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.503651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.503662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.503850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.503866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.504220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.504231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.504544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.504556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 
00:30:29.891 [2024-11-20 07:31:04.504750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.504762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.505062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.505073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.505268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.505279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.505462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.505472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.505799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.505811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.506111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.891 [2024-11-20 07:31:04.506122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.891 qpair failed and we were unable to recover it. 00:30:29.891 [2024-11-20 07:31:04.506431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.506443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.506767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.506779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.506982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.506994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.507325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.507336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 
00:30:29.892 [2024-11-20 07:31:04.507639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.507651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.507974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.507986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.508223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.508234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.508431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.508442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.508773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.508786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.509007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.509018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.509363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.509375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.509691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.509702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.510009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.510020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.510344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.510356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 
00:30:29.892 [2024-11-20 07:31:04.510702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.510714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.511009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.511022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.511263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.511275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.511575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.511587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.511892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.512198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.512210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.512537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.512548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.512879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.512892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.513099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.513110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.513335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.513346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 
00:30:29.892 [2024-11-20 07:31:04.513541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.513552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.513839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.513851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.514164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.514177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.514488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.514501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.514840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.514852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.515163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.515175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.515477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.515489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.515652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.515664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.515979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.516164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.516174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 
00:30:29.892 [2024-11-20 07:31:04.516477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.516488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.516795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.516807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.517119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.892 [2024-11-20 07:31:04.517131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-11-20 07:31:04.517390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.517401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.517633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.517644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.517838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.517850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.518187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.518198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.518526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.518537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.518840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.518850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.519157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.519168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 
00:30:29.893 [2024-11-20 07:31:04.519464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.519476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.519782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.519792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.520131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.520142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.520471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.520483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.520790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.520801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.521114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.521126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.521423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.521434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.521632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.521643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.521957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.521968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.522285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.522296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 
00:30:29.893 [2024-11-20 07:31:04.522489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.522503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.522826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.522838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.523158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.523169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.523487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.523507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.523828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.523838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.524114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.524136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.524453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.524464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.524793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.524804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.525115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.525127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.525444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.525455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 
00:30:29.893 [2024-11-20 07:31:04.525754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.525766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.526087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.526099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.526305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.526315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.526618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.526629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.526916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.526927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.527251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.527262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.527594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.527606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.527912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.527923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.528230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.528242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.528553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.528565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 
00:30:29.893 [2024-11-20 07:31:04.528754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.893 [2024-11-20 07:31:04.528766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-11-20 07:31:04.529077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.529089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.529369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.529380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.529686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.529698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.530033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.530044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.530330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.530342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.530637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.530648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.530977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.530989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.531282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.531293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.531584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.531595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 
00:30:29.894 [2024-11-20 07:31:04.531894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.531906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.532244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.532255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.532559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.532570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.532769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.532781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.533098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.533111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.533315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.533326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.533565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.533575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.533781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.533792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.534118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.534130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 00:30:29.894 [2024-11-20 07:31:04.534437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.534448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it. 
00:30:29.894 [2024-11-20 07:31:04.534765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.894 [2024-11-20 07:31:04.534776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.894 qpair failed and we were unable to recover it.
00:30:29.894 [... last 3 messages repeated ~200 times, 2024-11-20 07:31:04.535 -- 07:31:04.605, all for tqpair=0x1e95490 with addr=10.0.0.2, port=4420 ...]
00:30:29.900 [2024-11-20 07:31:04.605900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.605912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.606106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.606118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.606382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.606393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.606746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.606757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.607075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.607087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.607415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.607426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.607705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.607718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.608018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.608029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.608372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.608383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.608725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.608737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 
00:30:29.900 [2024-11-20 07:31:04.608938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.608951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.609259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.609271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.609612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.609623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.609916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.609928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.610151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.610162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.610475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.610487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.610815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.610827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.611221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.611233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.611561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.611573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.611906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.611918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 
00:30:29.900 [2024-11-20 07:31:04.612243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.612254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.612553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.612564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.612875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.612888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.613156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.613167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.613480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.613492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.613819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.613830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.614183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.614195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.900 [2024-11-20 07:31:04.614462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.900 [2024-11-20 07:31:04.614473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.900 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.614776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.614787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.615004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.615015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 
00:30:29.901 [2024-11-20 07:31:04.615334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.615346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.615648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.615659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.615989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.616000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.616315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.616326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.616655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.616667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.616975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.616989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.617294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.617306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.617640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.617651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.617999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.618010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.618340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.618352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 
00:30:29.901 [2024-11-20 07:31:04.618679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.618691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.618883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.618897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.619198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.619209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.619524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.619535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.619906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.619918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.620222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.620243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.620561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.620572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.620875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.621081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.621094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.621410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.621422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 
00:30:29.901 [2024-11-20 07:31:04.621610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.621622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.621935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.621946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.622276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.622288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.622614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.622626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.622927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.622939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.623134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.623145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.623450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.623461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.623776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.623788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.623995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.624006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.624297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.624308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 
00:30:29.901 [2024-11-20 07:31:04.624637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.624648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.624988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.624999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.625203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.625219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.625528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.625540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.901 [2024-11-20 07:31:04.625881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.901 qpair failed and we were unable to recover it. 00:30:29.901 [2024-11-20 07:31:04.626162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.902 [2024-11-20 07:31:04.626172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.902 qpair failed and we were unable to recover it. 00:30:29.902 [2024-11-20 07:31:04.626489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.902 [2024-11-20 07:31:04.626500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:29.902 qpair failed and we were unable to recover it. 00:30:30.182 [2024-11-20 07:31:04.626806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.626818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-11-20 07:31:04.627023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.627036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-11-20 07:31:04.627408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.627421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 
00:30:30.182 [2024-11-20 07:31:04.627749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.627760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-11-20 07:31:04.628088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.628100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.182 [2024-11-20 07:31:04.628448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.182 [2024-11-20 07:31:04.628460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.182 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.628665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.628676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.628896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.628908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.629230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.629242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.629575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.629587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.629891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.629903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.630222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.630233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.630532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.630543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-11-20 07:31:04.630867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.630880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.631179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.631190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.631496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.631507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.631841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.631852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.632179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.632518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.632530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.632806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.632818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.633112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.633124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.633453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.633465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.633791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.633805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-11-20 07:31:04.634103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.634115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.634442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.634454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.634779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.634791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.635103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.635115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.635426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.635439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.635744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.635756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.636077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.636090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.636474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.636486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.636795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.636808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.637143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.637156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-11-20 07:31:04.637457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.637469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.637761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.637773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.638102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.638115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.638426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.638438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.638764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.638776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.639082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.639095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.639365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.639378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.639683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.639695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.640026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.640039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 00:30:30.183 [2024-11-20 07:31:04.640343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.183 [2024-11-20 07:31:04.640355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.183 qpair failed and we were unable to recover it. 
00:30:30.183 [2024-11-20 07:31:04.640660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.640672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.641016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.641028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.641337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.641349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.641655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.641667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.642006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.642019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.642319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.642331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.642634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.642646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.642976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.642989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.643314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.643325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.643632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.643644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-11-20 07:31:04.643938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.643949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.644278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.644289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.644583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.644594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.644923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.644934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.645237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.645249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.645538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.645549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.645870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.645881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.646211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.646222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.646529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.646541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 00:30:30.184 [2024-11-20 07:31:04.646833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.184 [2024-11-20 07:31:04.646844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.184 qpair failed and we were unable to recover it. 
00:30:30.184 [2024-11-20 07:31:04.647151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.184 [2024-11-20 07:31:04.647163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.184 qpair failed and we were unable to recover it.
00:30:30.184 [... this three-line failure repeats for some 210 consecutive connect attempts in total, with only the microsecond timestamps advancing from 07:31:04.647151 to 07:31:04.712079; every attempt targets tqpair=0x1e95490 at addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:30:30.190 [2024-11-20 07:31:04.712393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.712734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.712746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.712941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.712953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.713239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.713249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.713573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.713585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.713783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.713795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.713956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.713969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.714259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.714271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.714488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.714500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.714809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.714822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-11-20 07:31:04.715143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.715156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.715473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.715485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.715639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.715651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.715849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.715865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.716172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.716184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.716497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.716509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.716850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.716872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.717115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.717127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.717436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.717448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.717834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.717846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 
00:30:30.190 [2024-11-20 07:31:04.718175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.718187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.718508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.718520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.718907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.718919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.719188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.719199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.719507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.719519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.719696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.719708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.719909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.719921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.720231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.720243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.720573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.190 [2024-11-20 07:31:04.720585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.190 qpair failed and we were unable to recover it. 00:30:30.190 [2024-11-20 07:31:04.720919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.720931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 
00:30:30.191 [2024-11-20 07:31:04.721277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.721290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.721589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.721600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.721907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.721918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.722258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.722269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.722600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.722614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.722922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.722933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.723264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.723275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.723585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.723596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.723899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.723910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.724209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.724220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 
00:30:30.191 [2024-11-20 07:31:04.724552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.724563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.724770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.724781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.725077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.725089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.725391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.725403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.725752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.725764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.725960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.725971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.726282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.726292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.726619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.726631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.726965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.726976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.727272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.727283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 
00:30:30.191 [2024-11-20 07:31:04.727647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.727658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.727950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.727962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.728287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.728299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.728474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.728484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.728655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.728665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.728888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.728900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.729242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.729253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.729318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.729327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.729594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.729604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.729800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.729810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 
00:30:30.191 [2024-11-20 07:31:04.730095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.730106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.730269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.730282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.730575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.730586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.730760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.730771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.731057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.731069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.731396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.731408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.731719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.191 [2024-11-20 07:31:04.731730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.191 qpair failed and we were unable to recover it. 00:30:30.191 [2024-11-20 07:31:04.732029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.732041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.732351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.732362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.732646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.732666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 
00:30:30.192 [2024-11-20 07:31:04.733002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.733014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.733200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.733212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.733526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.733537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.733830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.733842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.734026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.734038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.734334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.734346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.734567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.734578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.734891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.734903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.735128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.735140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.735327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.735339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 
00:30:30.192 [2024-11-20 07:31:04.735516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.735528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.735833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.735845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.736038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.736051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.736229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.736240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.736558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.736570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.736887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.736899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.737226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.737237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.737618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.737629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.737955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.737967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.738382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.738393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 
00:30:30.192 [2024-11-20 07:31:04.738700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.738720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.738988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.739000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.739318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.739331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.739662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.739672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.739995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.740006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.740170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.740183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.740504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.740514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.740751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.741067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.741079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.192 [2024-11-20 07:31:04.741396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.741407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 
00:30:30.192 [2024-11-20 07:31:04.741740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.192 [2024-11-20 07:31:04.741751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.192 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.742140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.742152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.742513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.742524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.742828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.742840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.743107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.743119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.743394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.743405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.743719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.743731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.743925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.743936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.744255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.744266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.744440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.744451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-11-20 07:31:04.744627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.744639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.744939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.744950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.745267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.745279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.745491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.745502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.745813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.746137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.746149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.746471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.746483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.746812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.746823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.747128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.747139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.747478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.747490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-11-20 07:31:04.747821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.747831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.748149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.748161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.748358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.748368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.748758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.748769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.749057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.749393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.749404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.749809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.749821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.750147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.750158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.750347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.750359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.750544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.750558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 
00:30:30.193 [2024-11-20 07:31:04.750744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.750756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.750996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.751007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.751323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.751333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.751628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.751640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.751953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.751964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.752263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.752273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.752556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.752568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.752887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.752898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.193 [2024-11-20 07:31:04.753138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.193 [2024-11-20 07:31:04.753148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.193 qpair failed and we were unable to recover it. 00:30:30.194 [2024-11-20 07:31:04.753451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.194 [2024-11-20 07:31:04.753462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.194 qpair failed and we were unable to recover it. 
[... the same "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats roughly 190 more times, timestamps 07:31:04.753776 through 07:31:04.812644; duplicate entries elided ...]
00:30:30.199 [2024-11-20 07:31:04.812923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.812934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.813273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.813283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.813577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.813590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.813778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.813790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.814096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.814107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.814450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.814460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.814740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.814750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.815120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.815131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.815433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.815443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.815737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.815747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-11-20 07:31:04.816041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.816051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.816264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.816275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.816544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.816554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.816890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.816901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.817197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.817207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.817518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.817527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.817724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.817736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.818019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.818030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.818360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.818369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.818699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.818708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-11-20 07:31:04.819002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.819012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.819325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.819336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.819652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.819662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.819981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.819992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.820207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.820217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.820535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.820544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.820895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.820906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.821098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.821108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.821381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.821391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.821734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.821743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 
00:30:30.199 [2024-11-20 07:31:04.822039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.822050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.822354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.199 [2024-11-20 07:31:04.822364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.199 qpair failed and we were unable to recover it. 00:30:30.199 [2024-11-20 07:31:04.822647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.822657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.823030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.823040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.823262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.823272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.823572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.823582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.823858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.823873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.824168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.824383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.824393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.824701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.824711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-11-20 07:31:04.825012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.825022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.825321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.825331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.825621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.825631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.825915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.825927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.826224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.826233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.826519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.826529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.826850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.826859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.827184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.827195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.827510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.827519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.827851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.827865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-11-20 07:31:04.828181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.828190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.828535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.828544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.828828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.828838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.829166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.829177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.829509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.829518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.829847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.829858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.830199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.830210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.830470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.830479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.830788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.830797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.831074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.831085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-11-20 07:31:04.831422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.831432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.831713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.831723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.832075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.832085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.832380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.832390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.832682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.832692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.832998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.833009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.833334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.833344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.833687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.833697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.834011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.834021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 00:30:30.200 [2024-11-20 07:31:04.834361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.200 [2024-11-20 07:31:04.834371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.200 qpair failed and we were unable to recover it. 
00:30:30.200 [2024-11-20 07:31:04.834707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.834719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.834923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.834934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.835220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.835229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.835541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.835551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.835888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.835898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.836233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.836243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.836587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.836597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.836785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.836794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.837133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.837144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.837473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.837482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 
00:30:30.201 [2024-11-20 07:31:04.837766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.837776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.838003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.838013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.838316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.838326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.838643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.838653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.838938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.838948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.839286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.839296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.839610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.839922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.839933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.840267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.840277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.840453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.840464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 
00:30:30.201 [2024-11-20 07:31:04.840677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.840686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.841065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.841076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.841412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.841422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.841591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.841601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.841883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.841894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.842244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.842254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.842583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.842592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.842798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.842808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.843100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.843111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.843426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.843436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 
00:30:30.201 [2024-11-20 07:31:04.843772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.843781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.844086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.844096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.844385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.844395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.844714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.201 [2024-11-20 07:31:04.844724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.201 qpair failed and we were unable to recover it. 00:30:30.201 [2024-11-20 07:31:04.845042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.845053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.845255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.845265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.845567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.845577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.845915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.845925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.846270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.846280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.846563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.846573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 
00:30:30.202 [2024-11-20 07:31:04.846884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.846894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.847233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.847246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.847445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.847454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.847770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.847779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.848090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.848100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.848428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.848438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.848764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.848774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.849080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.849090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.849430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.849440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.849764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.849774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 
00:30:30.202 [2024-11-20 07:31:04.850140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.850151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.850431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.850441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.850760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.850770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.851095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.851106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.851447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.851456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.851678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.851687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.852083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.852093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.852401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.852410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.852728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.852738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.853041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.853052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 
00:30:30.202 [2024-11-20 07:31:04.853392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.853401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.853741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.853751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.854038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.854048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.854342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.854352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.854628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.854637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.855019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.855029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.855370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.855380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.855711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.855721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.856036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.856048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 00:30:30.202 [2024-11-20 07:31:04.856388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.202 [2024-11-20 07:31:04.856397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.202 qpair failed and we were unable to recover it. 
00:30:30.202 [2024-11-20 07:31:04.856734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.202 [2024-11-20 07:31:04.856744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.202 qpair failed and we were unable to recover it.
00:30:30.202 [... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times, from [2024-11-20 07:31:04.857020] through [2024-11-20 07:31:04.921517] ...]
00:30:30.208 [2024-11-20 07:31:04.921806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.921816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.922170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.922180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.922490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.922501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.922813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.922823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.923115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.923125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.923442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.923454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.923778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.923788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.924113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.924124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.924428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.924438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.924757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.924767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 
00:30:30.208 [2024-11-20 07:31:04.925049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.925060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.925380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.925390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.925739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.925749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.926085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.926095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.926420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.926429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.208 qpair failed and we were unable to recover it. 00:30:30.208 [2024-11-20 07:31:04.926746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.208 [2024-11-20 07:31:04.926756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.209 qpair failed and we were unable to recover it. 00:30:30.209 [2024-11-20 07:31:04.927081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.209 [2024-11-20 07:31:04.927091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.209 qpair failed and we were unable to recover it. 00:30:30.209 [2024-11-20 07:31:04.927410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.209 [2024-11-20 07:31:04.927420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.209 qpair failed and we were unable to recover it. 00:30:30.513 [2024-11-20 07:31:04.927738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.513 [2024-11-20 07:31:04.927749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.513 qpair failed and we were unable to recover it. 00:30:30.513 [2024-11-20 07:31:04.928097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.513 [2024-11-20 07:31:04.928108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.513 qpair failed and we were unable to recover it. 
00:30:30.513 [2024-11-20 07:31:04.928424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.513 [2024-11-20 07:31:04.928433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.513 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.928746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.928756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.928980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.928991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.929177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.929187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.929505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.929515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.929845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.929855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.930191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.930202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.930552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.930562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.930878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.930889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.931223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.931233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 
00:30:30.514 [2024-11-20 07:31:04.931531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.931541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.931936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.931947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.932255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.932267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.932604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.932613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.932894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.932904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.933276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.933287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.933603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.933614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.933925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.933935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.934244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.934253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.934535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.934544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 
00:30:30.514 [2024-11-20 07:31:04.934891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.934901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.935217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.935226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.935423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.935750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.935760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.936084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.936095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.936271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.936282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.936719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.936729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.937027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.937037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.937327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.937336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.937675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.937684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 
00:30:30.514 [2024-11-20 07:31:04.937967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.937978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.938318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.514 [2024-11-20 07:31:04.938329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.514 qpair failed and we were unable to recover it. 00:30:30.514 [2024-11-20 07:31:04.938649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.938659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.938975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.938985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.939304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.939314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.939623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.939633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.939937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.939947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.940310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.940320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.940594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.940603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.940790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.940802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 
00:30:30.515 [2024-11-20 07:31:04.941125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.941136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.941465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.941475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.941855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.941870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.942152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.942162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.942491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.942501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.942849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.942859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.942987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.942997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.943230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.943240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.943526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.943536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.943880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.943890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 
00:30:30.515 [2024-11-20 07:31:04.944118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.944128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.944477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.944486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.944829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.944839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.945209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.945220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.945505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.945515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.945714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.945724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.945940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.945951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.946313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.946323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.946631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.946640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.947036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.947047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 
00:30:30.515 [2024-11-20 07:31:04.947361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.947371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.947711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.947721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.948041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.948051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.948228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.948239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.515 [2024-11-20 07:31:04.948502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.515 [2024-11-20 07:31:04.948511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.515 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.948831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.948840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.949151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.949161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.949455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.949465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.949845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.949855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.950192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.950203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 
00:30:30.516 [2024-11-20 07:31:04.950492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.950502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.950810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.950820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.951113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.951124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.951488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.951498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.951796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.951806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.952180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.952191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.952430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.952440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.952761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.952771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.953058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.953069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.953294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.953304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 
00:30:30.516 [2024-11-20 07:31:04.953632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.953644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.953839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.953849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.954080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.954090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.954422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.954432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.954769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.954779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.955078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.955089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.955435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.955445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.955731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.955741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.956122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.956132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.956483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.956493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 
00:30:30.516 [2024-11-20 07:31:04.956847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.956857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.957160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.957170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.957511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.957521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.957842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.957853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.958178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.958189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.516 qpair failed and we were unable to recover it. 00:30:30.516 [2024-11-20 07:31:04.958501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.516 [2024-11-20 07:31:04.958511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.958723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.958733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.958960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.958972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.959289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.959299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.959638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.959647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 
00:30:30.517 [2024-11-20 07:31:04.959964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.959975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.960268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.960278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.960576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.960586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.960802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.960811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.961145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.961156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.961472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.961482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.961811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.961821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.962166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.962179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.962361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.962371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 00:30:30.517 [2024-11-20 07:31:04.962683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.517 [2024-11-20 07:31:04.962701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.517 qpair failed and we were unable to recover it. 
00:30:30.517 [2024-11-20 07:31:04.963009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.517 [2024-11-20 07:31:04.963020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.517 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each reconnect attempt from 07:31:04.963325 through 07:31:05.025107 ...]
00:30:30.524 [2024-11-20 07:31:05.025387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.524 [2024-11-20 07:31:05.025396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.524 qpair failed and we were unable to recover it.
00:30:30.524 [2024-11-20 07:31:05.025690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.025700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.026065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.026075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.026279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.026288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.026601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.026611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.026904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.026917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.027214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.027225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.027439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.027449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.027728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.027738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.028048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.028058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.028374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.028384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 
00:30:30.524 [2024-11-20 07:31:05.028717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.028727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.029062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.029072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.029365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.029374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.029702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.029712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.029988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.029998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.030190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.030200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.030523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.030532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.030813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.030823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.031170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.031181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 00:30:30.524 [2024-11-20 07:31:05.031522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.524 [2024-11-20 07:31:05.031532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.524 qpair failed and we were unable to recover it. 
00:30:30.524 [2024-11-20 07:31:05.031821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.031831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.032142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.032152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.032439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.032449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.032637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.032647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.032970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.032980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.033269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.033279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.033591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.033601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.033915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.033925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.034248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.034257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.034542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.034552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 
00:30:30.525 [2024-11-20 07:31:05.034839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.034848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.035191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.035201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.035529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.035539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.035866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.035877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.036214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.036224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.036443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.036453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.036765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.036775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.037075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.037086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.037294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.037304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.037593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.037603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 
00:30:30.525 [2024-11-20 07:31:05.037778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.037788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.038073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.038083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.038274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.038284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.038655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.038665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.039040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.039050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.039361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.039371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.039684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.525 [2024-11-20 07:31:05.039694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.525 qpair failed and we were unable to recover it. 00:30:30.525 [2024-11-20 07:31:05.040015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.040026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.040345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.040354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.040709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.040719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 
00:30:30.526 [2024-11-20 07:31:05.041025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.041036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.041336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.041345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.041562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.041571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.041898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.041908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.042196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.042205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.042521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.042531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.042849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.042859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.043161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.043171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.043355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.043365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.043701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.043711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 
00:30:30.526 [2024-11-20 07:31:05.044111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.044121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.044415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.044425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.044594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.044604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.044891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.044902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.045308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.045318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.045644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.045653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.045974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.045984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.046373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.046383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.046685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.046694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.046983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.046994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 
00:30:30.526 [2024-11-20 07:31:05.047305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.047315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.047611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.047621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.047958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.047971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.048277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.048287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.048601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.048611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.048898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.048909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.049219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.049229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.526 qpair failed and we were unable to recover it. 00:30:30.526 [2024-11-20 07:31:05.049435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.526 [2024-11-20 07:31:05.049444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.049778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.049787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.049967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.049977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 
00:30:30.527 [2024-11-20 07:31:05.050264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.050274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.050603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.050612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.050945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.050955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.051328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.051338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.051710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.051720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.052019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.052030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.052358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.052368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.052534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.052545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.052890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.052900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.053228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.053237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 
00:30:30.527 [2024-11-20 07:31:05.053525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.053534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.053823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.053833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.054159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.054169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.054515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.054525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.054877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.054888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.055240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.055530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.055539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.055881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.055891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.056232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.056241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.056407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.056420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 
00:30:30.527 [2024-11-20 07:31:05.056695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.056705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.057050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.057060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.057404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.057414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.057612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.057622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.057932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.057942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.058246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.058256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.058583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.058593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.058768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.058777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.527 [2024-11-20 07:31:05.059115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.527 [2024-11-20 07:31:05.059126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.527 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.059423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.059433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 
00:30:30.528 [2024-11-20 07:31:05.059715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.059725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.060005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.060015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.060197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.060206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.060489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.060499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.060788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.060797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.061115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.061125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.061324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.061335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.061628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.061637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.061942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.061953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.062265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.062275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 
00:30:30.528 [2024-11-20 07:31:05.062589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.062598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.062949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.063290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.063300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.063580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.063590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.063909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.063919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.064234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.064244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.064527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.064539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.064892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.064902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.065213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.065222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 00:30:30.528 [2024-11-20 07:31:05.065586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.065596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it. 
00:30:30.528 [2024-11-20 07:31:05.065898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.528 [2024-11-20 07:31:05.065908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.528 qpair failed and we were unable to recover it.
00:30:30.535 [... ~200 further identical retries of the same three-message failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1e95490 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") from 07:31:05.065898 through 07:31:05.131327 ...]
00:30:30.535 [2024-11-20 07:31:05.131632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.131641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.131804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.131815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.132083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.132093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.132379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.132391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.132764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.132774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.133077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.133087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.133363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.133373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.133711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.133720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.134008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.134018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.134322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.134332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 
00:30:30.535 [2024-11-20 07:31:05.134709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.134719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.135033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.135043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.135222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.135232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.135593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.135942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.135953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.136253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.136263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.136635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.535 [2024-11-20 07:31:05.136645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.535 qpair failed and we were unable to recover it. 00:30:30.535 [2024-11-20 07:31:05.136833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.137189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.137200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.137401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.137411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 
00:30:30.536 [2024-11-20 07:31:05.137709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.137719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.138031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.138041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.138376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.138385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.138676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.138685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.139024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.139034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.139374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.139384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.139719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.139729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.140041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.140052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.140360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.140370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.140678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.140688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 
00:30:30.536 [2024-11-20 07:31:05.141001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.141012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.141375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.141385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.141684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.141694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.142062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.142073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.142419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.142429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.142710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.142720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.143097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.143107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.143488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.143498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.143835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.143845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.144156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.144166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 
00:30:30.536 [2024-11-20 07:31:05.144506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.144515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.144824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.145161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.145171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.145444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.145454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.145764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.145774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.146076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.146086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.146407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.536 [2024-11-20 07:31:05.146416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.536 qpair failed and we were unable to recover it. 00:30:30.536 [2024-11-20 07:31:05.146750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.146759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.147076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.147087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.147395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.147404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 
00:30:30.537 [2024-11-20 07:31:05.147708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.147719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.148019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.148029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.148397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.148407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.148719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.148729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.149029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.149040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.149322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.149332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.149671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.149680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.149983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.149993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.150284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.150294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.150575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.150585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 
00:30:30.537 [2024-11-20 07:31:05.150899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.150909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.151222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.151231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.151514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.151524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.151840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.151849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.152163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.152173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.152477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.152487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.152769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.152779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.153076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.153086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.153357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.153367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.153699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.153709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 
00:30:30.537 [2024-11-20 07:31:05.154008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.154018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.154369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.154380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.154564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.154575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.154954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.154965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.537 [2024-11-20 07:31:05.155303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.537 [2024-11-20 07:31:05.155313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.537 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.155665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.155676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.155994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.156351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.156361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.156704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.156714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.156918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.156928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 
00:30:30.538 [2024-11-20 07:31:05.157194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.157204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.157489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.157499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.157822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.157832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.158122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.158132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.158451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.158461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.158802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.158812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.159095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.159105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.159443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.159453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.159745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.159755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.159946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.159957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 
00:30:30.538 [2024-11-20 07:31:05.160261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.160271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.160600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.160609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.160895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.160905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.161082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.161093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.161421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.161431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.161761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.161770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.162090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.162100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.162426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.162436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.162723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.162735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.163048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.163059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 
00:30:30.538 [2024-11-20 07:31:05.163252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.163263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.163536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.163546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.163880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.163891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.164220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.164229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.164521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.164531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.164832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.538 [2024-11-20 07:31:05.164842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.538 qpair failed and we were unable to recover it. 00:30:30.538 [2024-11-20 07:31:05.165161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.165171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.165450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.165460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.165807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.165817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.166110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.166121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 
00:30:30.539 [2024-11-20 07:31:05.166392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.166402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.166684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.166693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.167069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.167080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.167398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.167408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.167740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.167750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.168067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.168078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.168377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.168387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.168696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.168706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.169024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.169035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.169411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.169421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 
00:30:30.539 [2024-11-20 07:31:05.169697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.169707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.169883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.169893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.170208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.170218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.170418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.170428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.170668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.170678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.170984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.170995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.171335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.171345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.171662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.171672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.171963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.171973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.172172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.172181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 
00:30:30.539 [2024-11-20 07:31:05.172520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.172530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.172812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.172821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.173167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.173177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.173514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.173523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.173859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.173874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.174194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.174204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.174488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.174497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.174696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.174706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.175041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.175052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.539 qpair failed and we were unable to recover it. 00:30:30.539 [2024-11-20 07:31:05.175401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.539 [2024-11-20 07:31:05.175411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.540 qpair failed and we were unable to recover it. 
00:30:30.544 [2024-11-20 07:31:05.220590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.544 [2024-11-20 07:31:05.220680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:30.544 qpair failed and we were unable to recover it.
00:30:30.545 [... three further failure cycles against tqpair=0x7efe54000b90 (07:31:05.221096 through 07:31:05.221921) elided; from 07:31:05.222257 the identical cycle resumes against tqpair=0x1e95490 and continues unchanged through 07:31:05.235784 ...]
00:30:30.546 [2024-11-20 07:31:05.236131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.236142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.236459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.236787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.236800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.237103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.237114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.237493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.237504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.237714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.237724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.238025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.238035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.238379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.238389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.238717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.238726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.239111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.239122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 
00:30:30.546 [2024-11-20 07:31:05.239420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.239430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.239714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.239723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.240065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.240075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.240243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.546 [2024-11-20 07:31:05.240252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.546 qpair failed and we were unable to recover it. 00:30:30.546 [2024-11-20 07:31:05.240556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.240566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.240895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.241291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.241301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.241608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.241618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.241951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.241961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.242257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.242267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 
00:30:30.547 [2024-11-20 07:31:05.242597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.242606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.242907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.242917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.243249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.243258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.243552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.243562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.243887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.243898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.244207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.244217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.244545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.244556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.244730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.244739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.245051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.245062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.245418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.245428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 
00:30:30.547 [2024-11-20 07:31:05.245660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.245670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.245987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.245997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.246310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.246319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.246634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.246644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.246950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.246961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.247271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.247281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.247616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.247626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.247918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.247929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.248129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.248140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.248442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.248452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 
00:30:30.547 [2024-11-20 07:31:05.248788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.248798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.249179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.547 [2024-11-20 07:31:05.249189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.547 qpair failed and we were unable to recover it. 00:30:30.547 [2024-11-20 07:31:05.249494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.548 [2024-11-20 07:31:05.249504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.548 qpair failed and we were unable to recover it. 00:30:30.548 [2024-11-20 07:31:05.249792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.548 [2024-11-20 07:31:05.249802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.548 qpair failed and we were unable to recover it. 00:30:30.548 [2024-11-20 07:31:05.250115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.548 [2024-11-20 07:31:05.250125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.548 qpair failed and we were unable to recover it. 00:30:30.548 [2024-11-20 07:31:05.250312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.548 [2024-11-20 07:31:05.250321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.548 qpair failed and we were unable to recover it. 00:30:30.548 [2024-11-20 07:31:05.250608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.548 [2024-11-20 07:31:05.250618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.548 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.250961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.250973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.251174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.251185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.251580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.251589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 
00:30:30.825 [2024-11-20 07:31:05.251918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.251929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.252142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.252152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.252459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.252469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.252641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.252650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.252933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.252943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.253321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.253331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.253627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.253636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.254031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.254042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.254255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.254265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.254561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.254571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 
00:30:30.825 [2024-11-20 07:31:05.254836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.254846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.825 qpair failed and we were unable to recover it. 00:30:30.825 [2024-11-20 07:31:05.255206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.825 [2024-11-20 07:31:05.255216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.255561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.255571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.255767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.255778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.256033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.256044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.256400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.256410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.256724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.256734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.257048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.257059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.257363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.257373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.257668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.257678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 
00:30:30.826 [2024-11-20 07:31:05.257860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.257882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.258179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.258189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.258537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.258547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.258727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.258737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.258924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.258934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.259303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.259313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.259624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.259634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.260019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.260030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.260402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.260412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.260696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.260707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 
00:30:30.826 [2024-11-20 07:31:05.260928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.260939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.261272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.261282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.261612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.261622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.261986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.261997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.262343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.262353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.262675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.262685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.262894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.262904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.263282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.263292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.263577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.263587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.263759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.263770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 
00:30:30.826 [2024-11-20 07:31:05.264074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.264084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.264406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.264416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.264753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.264762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.265104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.265114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.265430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.265440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.265726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.265736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.266071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.266081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.266420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.266433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.826 [2024-11-20 07:31:05.266757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.826 [2024-11-20 07:31:05.266767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.826 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.267082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.267092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 
00:30:30.827 [2024-11-20 07:31:05.267288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.267298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.267598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.267608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.267944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.267954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.268272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.268281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.268575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.268585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.268819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.268828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.269135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.269146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.269476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.269486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.269801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.269812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.270169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.270179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 
00:30:30.827 [2024-11-20 07:31:05.270463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.270475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.270823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.270833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.271197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.271208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.271518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.271528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.271820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.271830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.272147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.272157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.272475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.272485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.272808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.272818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.273140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.273150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.273463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.273474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 
00:30:30.827 [2024-11-20 07:31:05.273692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.273702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.273998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.274008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.274309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.274318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.274522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.274531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.274735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.274744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.275008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.275019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.275345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.275355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.275695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.275705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.276049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.276060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 00:30:30.827 [2024-11-20 07:31:05.276343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.827 [2024-11-20 07:31:05.276353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.827 qpair failed and we were unable to recover it. 
00:30:30.827 [2024-11-20 07:31:05.276690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.827 [2024-11-20 07:31:05.276699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.827 qpair failed and we were unable to recover it.
00:30:30.827 [... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error of tqpair=0x1e95490 / "qpair failed and we were unable to recover it." record triplet repeats for every reconnect attempt from 07:31:05.276903 through 07:31:05.342127; duplicate records elided ...]
00:30:30.833 [2024-11-20 07:31:05.342127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.833 [2024-11-20 07:31:05.342139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.833 qpair failed and we were unable to recover it.
00:30:30.833 [2024-11-20 07:31:05.342456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.342826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.342836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.343066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.343077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.343356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.343366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.343731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.343742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.344055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.344065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.344429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.344439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.344774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.344784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.345106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.345117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.345422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.345432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 
00:30:30.833 [2024-11-20 07:31:05.345767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.345777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.346151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.346162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.346462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.346472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.346800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.346810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.347122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.347133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.347468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.347479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.833 [2024-11-20 07:31:05.347793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.833 [2024-11-20 07:31:05.347804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.833 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.348171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.348181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.348401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.348410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.348774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.348784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 
00:30:30.834 [2024-11-20 07:31:05.348983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.348993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.349295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.349305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.349618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.349627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.349897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.349907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.350118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.350129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.350446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.350456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.350744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.350755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.350935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.350946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.351230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.351240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.351535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.351546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 
00:30:30.834 [2024-11-20 07:31:05.351748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.351759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.352082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.352092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.352406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.352416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.352594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.352604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.352801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.352810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.353072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.353083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.353378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.353388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.353705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.353715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.354033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.354043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.354335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.354346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 
00:30:30.834 [2024-11-20 07:31:05.354641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.354651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.354901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.354911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.355249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.355262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.355585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.355596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.355877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.355887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.356249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.356258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.356428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.356438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.356710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.356720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.357005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.357016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 00:30:30.834 [2024-11-20 07:31:05.357319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.357329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.834 qpair failed and we were unable to recover it. 
00:30:30.834 [2024-11-20 07:31:05.357620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.834 [2024-11-20 07:31:05.357631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.357972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.357982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.358311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.358321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.358609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.358619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.358935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.358945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.359164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.359174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.359491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.359501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.359558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.359567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.359769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.359778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.360054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.360064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 
00:30:30.835 [2024-11-20 07:31:05.360280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.360290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.360608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.360617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.360915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.360925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.361285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.361295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.361605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.361614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.361895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.361905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.362306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.362316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.362626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.362635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.362953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.362963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.363267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.363279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 
00:30:30.835 [2024-11-20 07:31:05.363608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.363618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.363977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.363987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.364324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.364334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.364620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.364629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.364876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.364886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.365250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.365260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.365537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.365546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.365838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.365848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.366169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.366180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.366493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.366503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 
00:30:30.835 [2024-11-20 07:31:05.366732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.366741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.367098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.367108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.367285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.367296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.367586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.367596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.367887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.367898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.368203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.368212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.368607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.368617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.368831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.368841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.835 qpair failed and we were unable to recover it. 00:30:30.835 [2024-11-20 07:31:05.369171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.835 [2024-11-20 07:31:05.369181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.369469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.369480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 
00:30:30.836 [2024-11-20 07:31:05.369800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.369811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.370118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.370128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.370310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.370320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.370613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.370623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.370825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.370834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.371176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.371187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.371530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.371542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.371912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.371922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.372280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.372290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.372589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.372599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 
00:30:30.836 [2024-11-20 07:31:05.372907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.372917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.373126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.373136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.373471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.373480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.373814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.373824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.374002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.374012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.374329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.374338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.374804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.374814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.375205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.375227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.375566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.375576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.375869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.375879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 
00:30:30.836 [2024-11-20 07:31:05.376103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.376113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.376446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.376456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.376664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.376673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.377006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.377017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.377389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.377399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.377607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.377617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.377927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.377937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.378278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.378288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.378465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.378474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.378760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.378770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 
00:30:30.836 [2024-11-20 07:31:05.379176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.379186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.379405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.379714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.379724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.380038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.380048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.380228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.380239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.380583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.380593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.836 qpair failed and we were unable to recover it. 00:30:30.836 [2024-11-20 07:31:05.380889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.836 [2024-11-20 07:31:05.380900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.837 qpair failed and we were unable to recover it. 00:30:30.837 [2024-11-20 07:31:05.381212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.837 [2024-11-20 07:31:05.381222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.837 qpair failed and we were unable to recover it. 00:30:30.837 [2024-11-20 07:31:05.381527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.837 [2024-11-20 07:31:05.381537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.837 qpair failed and we were unable to recover it. 00:30:30.837 [2024-11-20 07:31:05.381825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.837 [2024-11-20 07:31:05.381835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.837 qpair failed and we were unable to recover it. 
00:30:30.837 [2024-11-20 07:31:05.382148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.837 [2024-11-20 07:31:05.382158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.837 qpair failed and we were unable to recover it.
00:30:30.841 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats, near-verbatim, ~160 more times for tqpair=0x1e95490 (10.0.0.2:4420), timestamps 07:31:05.382446 through 07:31:05.431636, every attempt failing with errno = 111 ...]
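errno = 111 is ECONNREFUSED: every connect() the host issues toward 10.0.0.2:4420 is refused because, at this point in the run, nothing is listening on the target port — expected while target_disconnect.sh has the NVMe-oF target down, and the host-side NVMe/TCP initiator keeps retrying the qpair, which is why the identical triplet floods the log. The same condition is easy to check by hand; a minimal sketch (illustrative only — the address and port are the ones this job uses, everything else is an assumption, not part of the test):

    #!/usr/bin/env bash
    # bash's /dev/tcp performs a plain TCP connect(); with no listener on the
    # port the remote kernel answers with RST and connect() fails with
    # ECONNREFUSED (errno 111) -- the same error posix_sock_create logs above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (errno 111, ECONNREFUSED)"
    fi

Once the target is restarted and its listener recreated, the same probe succeeds and the initiator's reconnects stop failing.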
00:30:30.841 [... ~18 further connect()/qpair-failed triplets (07:31:05.431915 through 07:31:05.436617) are interleaved with the test-script trace; the trace lines are reproduced below untangled ...]
00:30:30.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1487828 Killed "${NVMF_APP[@]}" "$@"
00:30:30.841 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:30.841 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:30.841 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:30.841 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:30.841 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
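The trace shows test case tc2 doing this on purpose: line 36 of target_disconnect.sh kills the running nvmf_tgt (the "Killed ${NVMF_APP[@]}" job notification), and disconnect_init then brings a fresh target up via nvmfappstart while the host's qpair reconnects are still failing. A minimal sketch of that kill-and-restart shape (hypothetical — the real disconnect_init lives in test/nvmf/host/target_disconnect.sh and also reconfigures the target over RPC; only the kill, restart, and waitforlisten steps are actually visible in this log, and any name below that is not in the log is made up for illustration):

    # Sketch of a disconnect_init-style helper: the old target has already been
    # killed by the caller; start a new one with the same core mask, wait for
    # its RPC server, then recreate the TCP listener on $1 (10.0.0.2 here).
    disconnect_init_sketch() {
        local ip=$1
        "${NVMF_APP[@]}" -m 0xF0 &        # restart the target application
        nvmfpid=$!
        waitforlisten "$nvmfpid"          # block until the RPC socket answers
        # ...then recreate the TCP transport/subsystem/listener on $ip:4420
        # via rpc.py (exact RPC sequence not shown in this log).
    }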
00:30:30.841 [2024-11-20 07:31:05.436912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.841 [2024-11-20 07:31:05.436923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.841 qpair failed and we were unable to recover it. 00:30:30.841 [2024-11-20 07:31:05.437241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.841 [2024-11-20 07:31:05.437252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.841 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.437533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.437543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.437825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.437835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.438140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.438151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.438485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.438496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.438803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.438813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.439122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.439133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.439436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.439447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 00:30:30.842 [2024-11-20 07:31:05.439751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.842 [2024-11-20 07:31:05.439763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.842 qpair failed and we were unable to recover it. 
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1488718
00:30:30.842 [2024-11-20 07:31:05.442919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.842 [2024-11-20 07:31:05.442931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.842 qpair failed and we were unable to recover it.
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1488718
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1488718 ']'
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:30.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:30.842 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:30.842 [2024-11-20 07:31:05.445082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.842 [2024-11-20 07:31:05.445094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420
00:30:30.842 qpair failed and we were unable to recover it.
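For context, waitforlisten (common/autotest_common.sh) blocks until the freshly started nvmf_tgt (pid 1488718 above) is alive and its RPC socket exists; the traced variables rpc_addr=/var/tmp/spdk.sock and max_retries=100 are its inputs. Below is a rough sketch of such a wait loop, assuming only what the trace shows; the real helper does more than this:

waitforlisten_sketch() {
    # Hypothetical reimplementation based only on the traced variables;
    # the actual waitforlisten is defined in common/autotest_common.sh.
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target process exited
        [[ -S $rpc_addr ]] && return 0            # RPC socket file exists
        sleep 0.5
    done
    return 1    # gave up before the socket appeared
}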
00:30:30.847 [2024-11-20 07:31:05.492669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.492679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.492886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.492896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.493253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.493263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.493621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.493632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.493968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.493979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.494174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.494184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.494363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.494373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.494710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.494720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.495076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.495086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.495401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.495411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 
00:30:30.847 [2024-11-20 07:31:05.495764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.495774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.496085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.496095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.496260] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:30:30.847 [2024-11-20 07:31:05.496315] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.847 [2024-11-20 07:31:05.496398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.496409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.496750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.496761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.497085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.497096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.497415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.497428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.497742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.497752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.497980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 00:30:30.847 [2024-11-20 07:31:05.498265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.847 [2024-11-20 07:31:05.498277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.847 qpair failed and we were unable to recover it. 
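For readers unfamiliar with the EAL parameter string above, here is a hypothetical plain-DPDK sketch of how those same flags would be handed to rte_eal_init(). The flag comments reflect standard DPDK EAL semantics; the values are just this test's configuration, not recommendations, and this is not the SPDK nvmf target's actual startup code.

/* Hypothetical sketch: passing the logged EAL parameters to rte_eal_init(). */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                              /* program name */
        "-c", "0xF0",                        /* hex core mask: run on cores 4-7 */
        "--no-telemetry",                    /* disable the telemetry socket */
        "--log-level=lib.eal:6",             /* per-component log levels (repeat per lib) */
        "--base-virtaddr=0x200000000000",    /* fixed base address for memory mappings */
        "--match-allocations",               /* free hugepage memory exactly as allocated */
        "--file-prefix=spdk0",               /* isolate shared-memory files per process */
        "--proc-type=auto",                  /* auto-detect primary vs. secondary process */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}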
[... the same three-line error sequence continues for every subsequent reconnect attempt, from 07:31:05.496398 up to the final attempt below, all against tqpair=0x1e95490 at 10.0.0.2:4420 with errno = 111 ...]
00:30:30.851 [2024-11-20 07:31:05.547929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:30.851 [2024-11-20 07:31:05.547940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 
00:30:30.851 qpair failed and we were unable to recover it. 
00:30:30.851 [2024-11-20 07:31:05.548291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.851 [2024-11-20 07:31:05.548302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.851 qpair failed and we were unable to recover it. 00:30:30.851 [2024-11-20 07:31:05.548633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.548643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.548991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.549001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.549204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.549214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.549382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.549391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.549565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.549575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.549850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.549860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.550198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.550207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.550516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.550526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.550812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.550822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 
00:30:30.852 [2024-11-20 07:31:05.551119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.551130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.551295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.551305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.551586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.551599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.551933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.551943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.552294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.552304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.552597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.552608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.552926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.552936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.553255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.553265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.553596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.553606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.553886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.553896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 
00:30:30.852 [2024-11-20 07:31:05.554219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.554229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.554389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.554399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.554721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.554732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.555050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.555060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.555377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.555388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.555729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.555739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.555944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.555954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.556255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.556264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.556311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.556321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.556591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.556601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 
00:30:30.852 [2024-11-20 07:31:05.556909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.556919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.557094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.557104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.557398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.557407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.557749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.557759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.558136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.558423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.558433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.558718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.558728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.559032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.559043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.559326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.559336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.852 qpair failed and we were unable to recover it. 00:30:30.852 [2024-11-20 07:31:05.559566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.852 [2024-11-20 07:31:05.559576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 
00:30:30.853 [2024-11-20 07:31:05.559763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.559775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.560111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.560122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.560417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.560427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.560745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.560755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.561069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.561079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.561419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.561430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.561761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.561771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.561982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.561993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.562195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.562205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.562600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.562610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 
00:30:30.853 [2024-11-20 07:31:05.562897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.562908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.563212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.563222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.563512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.563522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.563736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.563748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.564040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.564051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.564356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.564366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.564652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.564662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.564985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.564996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.565347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.565359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.565672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.565682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 
00:30:30.853 [2024-11-20 07:31:05.565992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.566002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.566348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.566358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.566540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.566550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.566842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.566852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.567185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.567195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.567526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.567536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.567840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.567850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.568180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.568191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.568516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.568528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.568829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.568845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 
00:30:30.853 [2024-11-20 07:31:05.569248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.569260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.569461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.569471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.569808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.853 [2024-11-20 07:31:05.569818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.853 qpair failed and we were unable to recover it. 00:30:30.853 [2024-11-20 07:31:05.570142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.570152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.570453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.570463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.570788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.570798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.571139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.571149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.571474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.571485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.571689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.571700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.572049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.572060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 
00:30:30.854 [2024-11-20 07:31:05.572456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.572471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.572705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.572715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.573037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.573048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.573366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.573377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.573582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.573592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.573707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.854 [2024-11-20 07:31:05.573944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.573957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.574349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.574359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.574704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.574714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:30.854 [2024-11-20 07:31:05.574898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.854 [2024-11-20 07:31:05.574909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:30.854 qpair failed and we were unable to recover it. 00:30:31.128 [2024-11-20 07:31:05.575164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.128 [2024-11-20 07:31:05.575175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.128 qpair failed and we were unable to recover it. 
00:30:31.129 [2024-11-20 07:31:05.578327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.129 [2024-11-20 07:31:05.578418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe54000b90 with addr=10.0.0.2, port=4420
00:30:31.129 qpair failed and we were unable to recover it.
[... two further attempts against tqpair=0x7efe54000b90 fail the same way, then the loop returns to tqpair=0x1e95490 and repeats through 07:31:05.581 ...]
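For context when reading the loop above: errno = 111 on Linux is ECONNREFUSED, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the IANA-assigned default NVMe/TCP port), so every connect() from the initiator fails immediately and the qpair is torn down. A minimal standalone sketch, not SPDK source, that produces the same errno against a reachable host with no listener on the port:

/* Standalone repro sketch -- not SPDK code. A TCP connect() to a
 * reachable address with no listener on the port fails with
 * ECONNREFUSED (errno 111 on Linux), matching the posix_sock_create
 * errors in this log. Address and port are taken from the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { 0 };

    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}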
[... the same connect() errno = 111 / tqpair=0x1e95490 failure continues to repeat, 07:31:05.582 through 07:31:05.600 ...]
00:30:31.131 [2024-11-20 07:31:05.600851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.600869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.601228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.601238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.601550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.601561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.601868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.601879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.602117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.602127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.602475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.602486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.602709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.602720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.602908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.602920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.603324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.603335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.603641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.131 [2024-11-20 07:31:05.603651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.131 qpair failed and we were unable to recover it. 00:30:31.131 [2024-11-20 07:31:05.603674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:31.131 [2024-11-20 07:31:05.603696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:31.131 [2024-11-20 07:31:05.603702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:31.131 [2024-11-20 07:31:05.603708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:31.131 [2024-11-20 07:31:05.603712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:31.131 [... connect()/qpair-failure retries continue from 07:31:05.603868 through 07:31:05.605245 ...]
00:30:31.131 [2024-11-20 07:31:05.605161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:31.131 [2024-11-20 07:31:05.605301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:31.131 [2024-11-20 07:31:05.605458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:31.131 [2024-11-20 07:31:05.605460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:31.131 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 record, each attempt ending in "qpair failed and we were unable to recover it.", repeats for every retry from 07:31:05.605717 through 07:31:05.649557 (console time 00:30:31.131-00:30:31.135) ...]
00:30:31.135 [2024-11-20 07:31:05.649617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.135 [2024-11-20 07:31:05.649627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.135 qpair failed and we were unable to recover it. 00:30:31.135 [2024-11-20 07:31:05.649812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.135 [2024-11-20 07:31:05.649823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.135 qpair failed and we were unable to recover it. 00:30:31.135 [2024-11-20 07:31:05.650197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.135 [2024-11-20 07:31:05.650208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.135 qpair failed and we were unable to recover it. 00:30:31.135 [2024-11-20 07:31:05.650506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.135 [2024-11-20 07:31:05.650515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.135 qpair failed and we were unable to recover it. 00:30:31.135 [2024-11-20 07:31:05.650913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.135 [2024-11-20 07:31:05.650924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.135 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.651302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.651312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.651489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.651498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.651685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.651697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.652033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.652043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.652292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.652301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 
00:30:31.136 [2024-11-20 07:31:05.652490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.652499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.652854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.652869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.653066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.653077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.653454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.653464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.653652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.653661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.654002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.654012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.654355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.654365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.654686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.654696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.655004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.655014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.655314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.655325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 
00:30:31.136 [2024-11-20 07:31:05.655649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.655659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.655860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.655876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.656182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.656192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.656418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.656427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.656641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.656651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.657030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.657083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.657358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.657539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.657857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 
00:30:31.136 [2024-11-20 07:31:05.657925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.657934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.658134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.658144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.658336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.658346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.658663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.658673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.659054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.659064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.659403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.659412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.659746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.659756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.659964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.659974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.660306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.660316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.660614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.660624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 
00:30:31.136 [2024-11-20 07:31:05.660807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.660819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.136 qpair failed and we were unable to recover it. 00:30:31.136 [2024-11-20 07:31:05.660986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.136 [2024-11-20 07:31:05.660996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.661216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.661225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.661402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.661412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.661701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.661711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.661913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.661923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.662247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.662257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.662438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.662448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.662821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.662830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.663141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.663152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 
00:30:31.137 [2024-11-20 07:31:05.663501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.663511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.663685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.663695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.663871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.663882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.664067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.664076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.664299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.664309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.664646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.664655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.664953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.664963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.665149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.665158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.665453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.665463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.665693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.665704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 
00:30:31.137 [2024-11-20 07:31:05.666039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.666050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.666096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.666105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.666447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.666456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.666636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.666645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.666944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.666954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.667295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.667307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.667628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.667639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.667956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.667966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.668054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.668064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.668420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.668429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 
00:30:31.137 [2024-11-20 07:31:05.668622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.668632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.668823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.668833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.669131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.669143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.669345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.669355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.669671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.669681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.669885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.669896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.669970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.669979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.670285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.670295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.670727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.137 [2024-11-20 07:31:05.670738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.137 qpair failed and we were unable to recover it. 00:30:31.137 [2024-11-20 07:31:05.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.671060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 
00:30:31.138 [2024-11-20 07:31:05.671368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.671378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.671651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.671661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.671826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.671836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.672181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.672192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.672367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.672377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.672666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.672676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.672879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.672890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.673081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.673093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.673397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.673407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.673726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.673736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 
00:30:31.138 [2024-11-20 07:31:05.674099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.674110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.674428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.674439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.674742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.674754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.674926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.674938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.675131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.675140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.675449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.675459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.675778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.675788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.676107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.676117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.676503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.676513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.676727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.676737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 
00:30:31.138 [2024-11-20 07:31:05.677020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.677030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.677369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.677379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.677682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.677692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.677906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.677917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.678087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.678096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.678294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.678304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.678627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.678639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.678941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.678952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.679158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.679168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.679346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.679355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 
00:30:31.138 [2024-11-20 07:31:05.679562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.679571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.679865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.679876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.138 [2024-11-20 07:31:05.680178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.138 [2024-11-20 07:31:05.680188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.138 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.680405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.680414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.680725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.680735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.681036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.681047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.681252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.681263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.681546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.681555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.681906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.681917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.682224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.682235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 
00:30:31.139 [2024-11-20 07:31:05.682526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.682535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.682860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.682875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.683085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.683095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.683464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.683474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.683798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.683807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.684146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.684157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.684471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.684481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.684663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.684674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.684874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.684885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 00:30:31.139 [2024-11-20 07:31:05.685232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.685242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it. 
00:30:31.139 [2024-11-20 07:31:05.685425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.139 [2024-11-20 07:31:05.685435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.139 qpair failed and we were unable to recover it.
00:30:31.139 [message group repeated 29 more times between 07:31:05.685597 and 07:31:05.692638: connect() failed, errno = 111; sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
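errno = 111 is ECONNREFUSED on Linux: connect() reaches the target address, but nothing is listening on TCP port 4420, which is consistent with nvmf_target_disconnect_tc2 deliberately taking the target down while the host keeps retrying. A minimal sketch that reproduces the same refusal from bash (address and port copied from the log; /dev/tcp is a bash built-in redirection, not a real filesystem path):

    # Attempt a TCP connection the same way the NVMe/TCP host does; with no
    # listener on 10.0.0.2:4420 this fails just like the errno = 111 lines above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused/timed out (ECONNREFUSED = errno 111)"
    fi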
00:30:31.140 [message group repeated 8 times between 07:31:05.692837 and 07:31:05.694557, interleaved with the shell trace below: connect() failed, errno = 111; sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:30:31.140 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:31.140 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:30:31.140 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:31.140 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:31.140 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
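The (( i == 0 )) and return 0 lines above come from a retry helper in autotest_common.sh that polls until the freshly started target answers. A hypothetical sketch of that idiom, illustrative only and not SPDK's actual helper ($tgt_pid and the retry budget are assumptions):

    # Poll until the target process answers, giving up when the counter runs out.
    wait_for_tgt() {
        local i
        for ((i = 10; i > 0; i--)); do
            kill -0 "$tgt_pid" 2>/dev/null && return 0   # target is up
            sleep 0.5
        done
        (( i == 0 )) && return 1                         # retries exhausted
    }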
00:30:31.140 [message group repeated 140 times between 07:31:05.694909 and 07:31:05.733936 (wall clock 00:30:31.140 through 00:30:31.144): connect() failed, errno = 111; sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:30:31.144 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:31.144 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:31.144 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.144 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.144 [message group repeated 8 times between 07:31:05.734130 and 07:31:05.736218, interleaved with the shell trace above: connect() failed, errno = 111; sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
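Two scripted steps surface in the trace above: the EXIT trap that guarantees target teardown, and the bdev_malloc_create RPC that backs the test with a RAM disk. A minimal standalone sketch of the same calls (assumes an SPDK checkout with a running target; nvmftestfini and process_shm are test-framework helpers, not standalone commands):

    # Tear the target down even if the test aborts (mirrors the trap above;
    # nvmftestfini is provided by SPDK's nvmf test framework).
    trap 'nvmftestfini' SIGINT SIGTERM EXIT

    # Create a 64 MiB malloc (RAM-backed) bdev with 512-byte blocks named Malloc0,
    # the same arguments rpc_cmd passes above; scripts/rpc.py ships with SPDK.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0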
00:30:31.148 [elided: connect() retry failures continue, 07:31:05.736 through 07:31:05.777]
00:30:31.148 Malloc0
00:30:31.148 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.148 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:31.148 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.148 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
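host/target_disconnect.sh line 21 then initializes the NVMe-oF TCP transport on the target. A hand-run sketch of the same step via rpc.py follows; the -o flag is reproduced verbatim from the trace (it is a TCP-specific transport option in rpc.py, but treat any further reading of it as an assumption):

    # Sketch: create/initialize the TCP transport inside the target so
    # NVMe-oF subsystems can later add TCP listeners on it.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    # the target then logs: tcp.c: ... *NOTICE*: *** TCP Transport Init ***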
00:30:31.149 [2024-11-20 07:31:05.782402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.149 [2024-11-20 07:31:05.782660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.782670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.782840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.782850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.783170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.783181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.783515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.783525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.783817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.783827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.784023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.784035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.784227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.784237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.784663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.784673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.784777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.784787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.785146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.785157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 
00:30:31.149 [2024-11-20 07:31:05.785492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.785503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.785819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.785829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.786035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.786045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.786242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.786252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.786624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.786634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.786813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.786823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.787076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.787087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.787453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.787463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.787798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.787809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.788176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.788187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 
00:30:31.149 [2024-11-20 07:31:05.788420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.788429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.788750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.788760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.789097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.789108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.789290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.789301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.789637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.789647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.789994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.790005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.790343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.790354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.790695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.790705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.790903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.790913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.149 qpair failed and we were unable to recover it. 00:30:31.149 [2024-11-20 07:31:05.791225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.149 [2024-11-20 07:31:05.791235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e95490 with addr=10.0.0.2, port=4420 00:30:31.150 qpair failed and we were unable to recover it. 
00:30:31.150 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.150 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:31.150 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.150 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.150 [... triplet repeats, interleaved with the trace above, from 07:31:05.791418 through 07:31:05.793531 ...]
00:30:31.150 [... triplet repeats from 07:31:05.793856 through 07:31:05.802572 ...]
00:30:31.151 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.151 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:31.151 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.151 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.151 [... triplet repeats, interleaved with the trace above, from 07:31:05.802947 through 07:31:05.804896 ...]
00:30:31.151 [... triplet repeats from 07:31:05.805119 through 07:31:05.815121 ...]
00:30:31.153 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.153 [... triplet repeats from 07:31:05.815425 through 07:31:05.815577 ...]
00:30:31.153 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:31.153 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.153 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.153 [... triplet repeats, interleaved with the trace above, from 07:31:05.815930 through 07:31:05.821201 ...]
00:30:31.153 [... triplet repeats from 07:31:05.821549 through 07:31:05.822470 ...]
00:30:31.154 [2024-11-20 07:31:05.822665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:31.154 [2024-11-20 07:31:05.823467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.154 [2024-11-20 07:31:05.823576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.154 [2024-11-20 07:31:05.823596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.154 [2024-11-20 07:31:05.823604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.154 [2024-11-20 07:31:05.823611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.154 [2024-11-20 07:31:05.823632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.154 qpair failed and we were unable to recover it.
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.154 [... the Unknown controller ID / Fabric CONNECT failure sequence above repeats at 07:31:05.833213 ...]
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.154 07:31:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1487993
00:30:31.154 [... sequence repeats at 07:31:05.843270 ...]
00:30:31.154 [... sequence repeats at 07:31:05.853281, 07:31:05.863249, 07:31:05.873122 ...]
00:30:31.417 [... sequence repeats at 07:31:05.883218, 07:31:05.893223, 07:31:05.903400 ...]
00:30:31.417 [... sequence repeats at 07:31:05.913369, 07:31:05.923310, 07:31:05.933446 ...]
00:30:31.418 [... sequence repeats at 07:31:05.943384, 07:31:05.953510, 07:31:05.963511 ...]
00:30:31.418 [2024-11-20 07:31:05.973562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:05.973615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:05.973629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:05.973636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:05.973643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:05.973656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:05.983480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:05.983542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:05.983556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:05.983563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:05.983572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:05.983586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:05.993615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:05.993679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:05.993705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:05.993714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:05.993721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:05.993741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 
00:30:31.418 [2024-11-20 07:31:06.003692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.003765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.003780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.003787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.003794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.003809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:06.013681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.013739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.013753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.013760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.013766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.013780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:06.023714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.023779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.023793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.023800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.023806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.023819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 
00:30:31.418 [2024-11-20 07:31:06.033705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.033757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.033771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.033778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.033784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.033798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:06.043745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.043799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.043813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.043820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.043827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.043840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.418 [2024-11-20 07:31:06.053782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.053839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.053853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.053859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.053871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.053885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 
00:30:31.418 [2024-11-20 07:31:06.063786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.418 [2024-11-20 07:31:06.063842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.418 [2024-11-20 07:31:06.063855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.418 [2024-11-20 07:31:06.063867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.418 [2024-11-20 07:31:06.063874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.418 [2024-11-20 07:31:06.063888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.418 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.073785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.073845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.073867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.073874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.073881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.073894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.083718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.083773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.083787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.083793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.083800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.083813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 
00:30:31.419 [2024-11-20 07:31:06.093759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.093826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.093839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.093846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.093852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.093871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.103922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.103981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.103994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.104001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.104008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.104021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.113940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.114000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.114013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.114020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.114029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.114043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 
00:30:31.419 [2024-11-20 07:31:06.123850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.123912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.123926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.123933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.123939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.123952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.134015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.134295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.134309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.134316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.134323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.134336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.144059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.144120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.144133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.144140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.144147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.144160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 
00:30:31.419 [2024-11-20 07:31:06.154052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.154108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.154122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.154128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.154135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.154149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.164109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.164175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.164189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.164196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.164202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.164215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 00:30:31.419 [2024-11-20 07:31:06.174114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.419 [2024-11-20 07:31:06.174177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.419 [2024-11-20 07:31:06.174190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.419 [2024-11-20 07:31:06.174197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.419 [2024-11-20 07:31:06.174204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.419 [2024-11-20 07:31:06.174217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.419 qpair failed and we were unable to recover it. 
00:30:31.682 [2024-11-20 07:31:06.184161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.184220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.184233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.184240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.184246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.184259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 00:30:31.682 [2024-11-20 07:31:06.194140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.194190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.194203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.194210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.194217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.194230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 00:30:31.682 [2024-11-20 07:31:06.204188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.204245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.204262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.204270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.204277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.204291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 
00:30:31.682 [2024-11-20 07:31:06.214109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.214172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.214186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.214193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.214199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.214212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 00:30:31.682 [2024-11-20 07:31:06.224268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.224354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.224367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.224374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.224380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.224394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 00:30:31.682 [2024-11-20 07:31:06.234177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.234282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.234295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.234303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.234309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.234322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 
00:30:31.682 [2024-11-20 07:31:06.244189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.682 [2024-11-20 07:31:06.244268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.682 [2024-11-20 07:31:06.244282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.682 [2024-11-20 07:31:06.244289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.682 [2024-11-20 07:31:06.244298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.682 [2024-11-20 07:31:06.244312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.682 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.254333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.254392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.254406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.254413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.254420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.254434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.264372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.264451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.264465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.264472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.264478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.264491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 
00:30:31.683 [2024-11-20 07:31:06.274374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.274423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.274436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.274443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.274449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.274462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.284396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.284449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.284463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.284470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.284476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.284489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.294440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.294500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.294514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.294521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.294527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.294541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 
00:30:31.683 [2024-11-20 07:31:06.304492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.304579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.304592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.304599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.304605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.304619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.314511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.314558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.314572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.314580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.314588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.314602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.324534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.324592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.324605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.324613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.324620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.324633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 
00:30:31.683 [2024-11-20 07:31:06.334474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.334528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.334545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.334552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.334559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.334572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.344607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.344678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.344692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.344699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.344705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.344719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.354630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.354690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.354703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.354710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.354717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.354730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 
00:30:31.683 [2024-11-20 07:31:06.364656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.364723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.364737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.364744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.364750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.364764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.683 qpair failed and we were unable to recover it. 00:30:31.683 [2024-11-20 07:31:06.374561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.683 [2024-11-20 07:31:06.374619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.683 [2024-11-20 07:31:06.374632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.683 [2024-11-20 07:31:06.374639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.683 [2024-11-20 07:31:06.374649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.683 [2024-11-20 07:31:06.374662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 00:30:31.684 [2024-11-20 07:31:06.384683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.384739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.384753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.384760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.384766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.384779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 
00:30:31.684 [2024-11-20 07:31:06.394770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.394824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.394837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.394844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.394850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.394868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 00:30:31.684 [2024-11-20 07:31:06.404797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.404873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.404887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.404893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.404899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.404914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 00:30:31.684 [2024-11-20 07:31:06.414789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.414846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.414859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.414870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.414876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.414890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 
00:30:31.684 [2024-11-20 07:31:06.424824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.424885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.424898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.424905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.424911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.424926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 00:30:31.684 [2024-11-20 07:31:06.434844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.434901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.434915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.434921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.434928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.434942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 00:30:31.684 [2024-11-20 07:31:06.444873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.684 [2024-11-20 07:31:06.444956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.684 [2024-11-20 07:31:06.444969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.684 [2024-11-20 07:31:06.444976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.684 [2024-11-20 07:31:06.444983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.684 [2024-11-20 07:31:06.444996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.684 qpair failed and we were unable to recover it. 
00:30:31.947 [2024-11-20 07:31:06.454921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.947 [2024-11-20 07:31:06.454979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.947 [2024-11-20 07:31:06.454993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.947 [2024-11-20 07:31:06.455000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.947 [2024-11-20 07:31:06.455007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.947 [2024-11-20 07:31:06.455020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.947 qpair failed and we were unable to recover it. 00:30:31.947 [2024-11-20 07:31:06.464945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.947 [2024-11-20 07:31:06.465005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.947 [2024-11-20 07:31:06.465022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.947 [2024-11-20 07:31:06.465029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.947 [2024-11-20 07:31:06.465035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.947 [2024-11-20 07:31:06.465049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.947 qpair failed and we were unable to recover it. 00:30:31.947 [2024-11-20 07:31:06.474966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.947 [2024-11-20 07:31:06.475019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.947 [2024-11-20 07:31:06.475032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.947 [2024-11-20 07:31:06.475039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.947 [2024-11-20 07:31:06.475046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:31.948 [2024-11-20 07:31:06.475059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:31.948 qpair failed and we were unable to recover it. 
00:30:31.948 [2024-11-20 07:31:06.484984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.485034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.485048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.485055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.485061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.485075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.495022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.495078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.495092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.495099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.495105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.495119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.505014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.505067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.505081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.505088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.505097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.505111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.515099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.515151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.515165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.515171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.515178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.515191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.525027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.525085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.525099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.525105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.525112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.525125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.535154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.535210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.535224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.535231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.535237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.535250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.545190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.545270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.545283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.545290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.545296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.545310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.555166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.555222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.555235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.555242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.555248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.555261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.565239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.565291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.565305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.565312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.565318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.565331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.575261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.575345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.575358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.575365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.575371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.575384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.585313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.585372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.585385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.585392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.585398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.585411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.595302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.595353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.595370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.595377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.595383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.948 [2024-11-20 07:31:06.595396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.948 qpair failed and we were unable to recover it.
00:30:31.948 [2024-11-20 07:31:06.605217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.948 [2024-11-20 07:31:06.605277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.948 [2024-11-20 07:31:06.605290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.948 [2024-11-20 07:31:06.605297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.948 [2024-11-20 07:31:06.605304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.605317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.615445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.615514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.615527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.615534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.615540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.615553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.625337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.625397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.625410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.625417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.625423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.625437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.635468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.635517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.635531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.635541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.635548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.635562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.645543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.645602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.645617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.645624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.645630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.645647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.655493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.655545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.655560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.655567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.655573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.655587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.665407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.665459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.665473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.665480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.665486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.665500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.675550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.675605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.675619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.675626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.675632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.675645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.685566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.685617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.685630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.685637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.685644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.685657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.695601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.695652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.695665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.695672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.695679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.695692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:31.949 [2024-11-20 07:31:06.705636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.949 [2024-11-20 07:31:06.705692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.949 [2024-11-20 07:31:06.705706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.949 [2024-11-20 07:31:06.705713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.949 [2024-11-20 07:31:06.705719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:31.949 [2024-11-20 07:31:06.705732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.949 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.715652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.715704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.212 [2024-11-20 07:31:06.715719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.212 [2024-11-20 07:31:06.715726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.212 [2024-11-20 07:31:06.715732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.212 [2024-11-20 07:31:06.715746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.212 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.725663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.725718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.212 [2024-11-20 07:31:06.725735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.212 [2024-11-20 07:31:06.725742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.212 [2024-11-20 07:31:06.725748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.212 [2024-11-20 07:31:06.725762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.212 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.735716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.735801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.212 [2024-11-20 07:31:06.735815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.212 [2024-11-20 07:31:06.735822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.212 [2024-11-20 07:31:06.735828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.212 [2024-11-20 07:31:06.735841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.212 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.745748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.745799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.212 [2024-11-20 07:31:06.745812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.212 [2024-11-20 07:31:06.745819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.212 [2024-11-20 07:31:06.745826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.212 [2024-11-20 07:31:06.745839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.212 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.755783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.755882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.212 [2024-11-20 07:31:06.755896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.212 [2024-11-20 07:31:06.755903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.212 [2024-11-20 07:31:06.755909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.212 [2024-11-20 07:31:06.755923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.212 qpair failed and we were unable to recover it.
00:30:32.212 [2024-11-20 07:31:06.765794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.212 [2024-11-20 07:31:06.765848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.765866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.765876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.765883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.765896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.775717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.775781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.775795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.775802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.775808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.775821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.785880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.785940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.785954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.785961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.785967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.785980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.795884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.795935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.795948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.795955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.795961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.795975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.805928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.805987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.806001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.806007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.806014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.806028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.815947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.816005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.816020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.816028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.816034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.816048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.825970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.826032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.826046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.826053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.826059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.826073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.836011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.836059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.836073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.836080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.836086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.836099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.846024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.846075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.846088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.846095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.846101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.846115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.856051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.856106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.856123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.856130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.856136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.856149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.865996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.866093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.866107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.866114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.866121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.866134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.876102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.876157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.213 [2024-11-20 07:31:06.876171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.213 [2024-11-20 07:31:06.876178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.213 [2024-11-20 07:31:06.876184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.213 [2024-11-20 07:31:06.876198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.213 qpair failed and we were unable to recover it.
00:30:32.213 [2024-11-20 07:31:06.886148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.213 [2024-11-20 07:31:06.886202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.886216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.886223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.886229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.886242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.896191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.896248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.896261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.896272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.896279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.896293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.906212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.906286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.906299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.906306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.906312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.906325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.916228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.916279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.916292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.916299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.916305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.916318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.926253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.926313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.926325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.926332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.926338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.926352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.936208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.936267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.936281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.936288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.936294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.936307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.946321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.946377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.946391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.946398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.946404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.946417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.956337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.956391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.956404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.956411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.956418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.956431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.214 [2024-11-20 07:31:06.966255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.214 [2024-11-20 07:31:06.966306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.214 [2024-11-20 07:31:06.966319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.214 [2024-11-20 07:31:06.966326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.214 [2024-11-20 07:31:06.966332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.214 [2024-11-20 07:31:06.966345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.214 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:06.976429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:06.976509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:06.976523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:06.976531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:06.976537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:06.976551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:06.986488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:06.986545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:06.986558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:06.986565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:06.986572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:06.986585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:06.996456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:06.996508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:06.996522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:06.996528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:06.996534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:06.996548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:07.006489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:07.006539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:07.006553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:07.006560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:07.006566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:07.006579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:07.016396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:07.016450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:07.016464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:07.016470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:07.016477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:07.016490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:07.026448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:07.026534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.477 [2024-11-20 07:31:07.026550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.477 [2024-11-20 07:31:07.026562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.477 [2024-11-20 07:31:07.026572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.477 [2024-11-20 07:31:07.026587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.477 qpair failed and we were unable to recover it.
00:30:32.477 [2024-11-20 07:31:07.036568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.477 [2024-11-20 07:31:07.036618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.478 [2024-11-20 07:31:07.036632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.478 [2024-11-20 07:31:07.036639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.478 [2024-11-20 07:31:07.036645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.478 [2024-11-20 07:31:07.036659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.478 qpair failed and we were unable to recover it.
00:30:32.478 [2024-11-20 07:31:07.046603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.478 [2024-11-20 07:31:07.046653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.478 [2024-11-20 07:31:07.046667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.478 [2024-11-20 07:31:07.046674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.478 [2024-11-20 07:31:07.046680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:32.478 [2024-11-20 07:31:07.046694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.478 qpair failed and we were unable to recover it.
00:30:32.478 [2024-11-20 07:31:07.056646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.056701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.056714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.056721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.056727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.056741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.066684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.066776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.066790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.066797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.066803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.066817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.076697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.076747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.076761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.076768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.076774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.076787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 
00:30:32.478 [2024-11-20 07:31:07.086745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.086843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.086857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.086870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.086877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.086891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.096764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.096819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.096832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.096839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.096845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.096858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.106764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.106831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.106844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.106851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.106858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.106877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 
00:30:32.478 [2024-11-20 07:31:07.116787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.116842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.116855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.116869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.116876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.116889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.126841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.126919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.126932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.126939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.126945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.126959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 00:30:32.478 [2024-11-20 07:31:07.136868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.136922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.136935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.136942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.478 [2024-11-20 07:31:07.136948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.478 [2024-11-20 07:31:07.136962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.478 qpair failed and we were unable to recover it. 
00:30:32.478 [2024-11-20 07:31:07.146894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.478 [2024-11-20 07:31:07.146951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.478 [2024-11-20 07:31:07.146965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.478 [2024-11-20 07:31:07.146972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.146978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.146992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.156913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.156975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.156988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.156998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.157004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.157018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.166825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.166888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.166902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.166909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.166915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.166928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 
00:30:32.479 [2024-11-20 07:31:07.176963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.177022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.177036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.177044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.177050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.177064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.186977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.187036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.187051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.187058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.187066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.187081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.197025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.197126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.197140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.197147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.197153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.197166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 
00:30:32.479 [2024-11-20 07:31:07.207046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.207105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.207119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.207126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.207133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.207146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.217116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.217171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.217184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.217190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.217197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.217210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.479 [2024-11-20 07:31:07.227122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.227175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.227188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.227195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.227201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.227214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 
00:30:32.479 [2024-11-20 07:31:07.237135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.479 [2024-11-20 07:31:07.237192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.479 [2024-11-20 07:31:07.237205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.479 [2024-11-20 07:31:07.237213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.479 [2024-11-20 07:31:07.237219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.479 [2024-11-20 07:31:07.237232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.479 qpair failed and we were unable to recover it. 00:30:32.743 [2024-11-20 07:31:07.247207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.247285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.247299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.247306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.247312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.247326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 00:30:32.743 [2024-11-20 07:31:07.257211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.257267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.257280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.257287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.257293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.257306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 
00:30:32.743 [2024-11-20 07:31:07.267220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.267273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.267286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.267293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.267299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.267312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 00:30:32.743 [2024-11-20 07:31:07.277126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.277179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.277192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.277199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.277205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.277219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 00:30:32.743 [2024-11-20 07:31:07.287171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.287231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.287244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.287254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.287261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.287274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 
00:30:32.743 [2024-11-20 07:31:07.297301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.297354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.297368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.297375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.297381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.297395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 00:30:32.743 [2024-11-20 07:31:07.307349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.743 [2024-11-20 07:31:07.307403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.743 [2024-11-20 07:31:07.307418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.743 [2024-11-20 07:31:07.307425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.743 [2024-11-20 07:31:07.307432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.743 [2024-11-20 07:31:07.307445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.743 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.317340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.317388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.317401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.317408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.317414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.317427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 
00:30:32.744 [2024-11-20 07:31:07.327405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.327461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.327475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.327481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.327488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.327504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.337479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.337533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.337547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.337554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.337561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.337574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.347336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.347395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.347408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.347415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.347421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.347435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 
00:30:32.744 [2024-11-20 07:31:07.357474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.357542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.357555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.357562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.357568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.357581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.367509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.367561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.367574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.367581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.367587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.367601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.377529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.377588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.377602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.377608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.377615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.377629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 
00:30:32.744 [2024-11-20 07:31:07.387448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.387499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.387512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.387519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.387526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.387539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.397482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.397538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.397551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.397558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.397564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.397578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.407591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.407645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.407658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.407665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.407672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.407685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 
00:30:32.744 [2024-11-20 07:31:07.417634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.417692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.417705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.417716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.417722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.417735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.427684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.427737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.427750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.427757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.427763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.427777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.744 qpair failed and we were unable to recover it. 00:30:32.744 [2024-11-20 07:31:07.437664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.744 [2024-11-20 07:31:07.437719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.744 [2024-11-20 07:31:07.437732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.744 [2024-11-20 07:31:07.437739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.744 [2024-11-20 07:31:07.437745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.744 [2024-11-20 07:31:07.437758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 
00:30:32.745 [2024-11-20 07:31:07.447731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.447778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.447792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.447799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.447805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.447819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 00:30:32.745 [2024-11-20 07:31:07.457753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.457809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.457823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.457830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.457836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.457853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 00:30:32.745 [2024-11-20 07:31:07.467767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.467877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.467891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.467898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.467905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.467918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 
00:30:32.745 [2024-11-20 07:31:07.477816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.477871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.477884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.477891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.477897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.477911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 00:30:32.745 [2024-11-20 07:31:07.487848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.487904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.487917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.487924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.487930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.487944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 00:30:32.745 [2024-11-20 07:31:07.497885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.745 [2024-11-20 07:31:07.497963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.745 [2024-11-20 07:31:07.497977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.745 [2024-11-20 07:31:07.497984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.745 [2024-11-20 07:31:07.497990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:32.745 [2024-11-20 07:31:07.498004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.745 qpair failed and we were unable to recover it. 
00:30:33.009 [2024-11-20 07:31:07.507905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.507971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.507985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.507992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.507998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.508011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 00:30:33.009 [2024-11-20 07:31:07.517805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.517857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.517875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.517882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.517888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.517902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 00:30:33.009 [2024-11-20 07:31:07.527978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.528027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.528040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.528047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.528053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.528066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 
00:30:33.009 [2024-11-20 07:31:07.537911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.537967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.537980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.537987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.537993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.538007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 00:30:33.009 [2024-11-20 07:31:07.548053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.548120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.548133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.548144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.548150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.548164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 00:30:33.009 [2024-11-20 07:31:07.558068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.558125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.558138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.558145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.558151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.558164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 
00:30:33.009 [2024-11-20 07:31:07.568039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.568091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.568104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.568111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.568117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.568130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.009 qpair failed and we were unable to recover it. 00:30:33.009 [2024-11-20 07:31:07.578117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.009 [2024-11-20 07:31:07.578169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.009 [2024-11-20 07:31:07.578182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.009 [2024-11-20 07:31:07.578188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.009 [2024-11-20 07:31:07.578195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.009 [2024-11-20 07:31:07.578208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.010 qpair failed and we were unable to recover it. 00:30:33.010 [2024-11-20 07:31:07.588166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.010 [2024-11-20 07:31:07.588219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.010 [2024-11-20 07:31:07.588233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.010 [2024-11-20 07:31:07.588240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.010 [2024-11-20 07:31:07.588246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.010 [2024-11-20 07:31:07.588263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.010 qpair failed and we were unable to recover it. 
00:30:33.010 [2024-11-20 07:31:07.598204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.598284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.598297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.598304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.598310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.598323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.608196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.608249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.608262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.608269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.608276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.608289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.618228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.618325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.618338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.618345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.618351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.618364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.628278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.628350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.628363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.628370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.628376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.628389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.638311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.638370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.638383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.638390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.638396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.638410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.648174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.648224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.648237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.648244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.648251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.648264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.658338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.658394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.658407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.658414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.658420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.658433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.668383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.668437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.668451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.668458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.668464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.668477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.678401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.678453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.678467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.678478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.678484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.678498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.688424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.688524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.688538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.688545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.688551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.688564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.698448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.698546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.698560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.698568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.010 [2024-11-20 07:31:07.698574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.010 [2024-11-20 07:31:07.698587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.010 qpair failed and we were unable to recover it.
00:30:33.010 [2024-11-20 07:31:07.708493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.010 [2024-11-20 07:31:07.708548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.010 [2024-11-20 07:31:07.708562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.010 [2024-11-20 07:31:07.708569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.708575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.708589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.718492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.718545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.718558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.718565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.718571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.718588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.728518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.728569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.728582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.728589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.728595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.728609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.738556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.738613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.738627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.738634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.738640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.738653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.748586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.748686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.748700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.748707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.748713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.748727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.758581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.758629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.758642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.758649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.758655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.758669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.011 [2024-11-20 07:31:07.768694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.011 [2024-11-20 07:31:07.768758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.011 [2024-11-20 07:31:07.768772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.011 [2024-11-20 07:31:07.768779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.011 [2024-11-20 07:31:07.768785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.011 [2024-11-20 07:31:07.768798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.011 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.778669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.778762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.778775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.778782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.778788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.778802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.788696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.788786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.788799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.788806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.788812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.788825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.798607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.798707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.798720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.798727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.798733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.798746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.808728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.808779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.808793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.808803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.808810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.808823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.818773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.818832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.818849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.818856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.818869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.818884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.828763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.828822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.828836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.828843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.828849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.828869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.838819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.838882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.838896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.838903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.838909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.838923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.848848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.848900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.848914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.848921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.848927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.848944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.858889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.858946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.858959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.858966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.858972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.858985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.868918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.868979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.868993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.869000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.869006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.869019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.878932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.878979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.878992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.878999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.274 [2024-11-20 07:31:07.879006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.274 [2024-11-20 07:31:07.879019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.274 qpair failed and we were unable to recover it.
00:30:33.274 [2024-11-20 07:31:07.888830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.274 [2024-11-20 07:31:07.888902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.274 [2024-11-20 07:31:07.888916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.274 [2024-11-20 07:31:07.888923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.888929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.888942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.899003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.899058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.899072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.899078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.899085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.899098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.909025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.909090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.909103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.909110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.909117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.909130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.919046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.919098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.919112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.919119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.919125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.919139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.929059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.929116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.929129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.929136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.929142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.929155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.939122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.939196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.939209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.939223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.939229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.939242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.949056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.949154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.949167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.949174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.949181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.949195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.959151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.959208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.959222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.959230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.959236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.959249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.969170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.969216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.969229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.969236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.969243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.969256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.979258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.979330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.979344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.979351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.979357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.979375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.989122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.989178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.989193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.989199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.989206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.989219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:07.999250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:07.999344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:07.999359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:07.999366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:07.999372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:07.999386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:08.009280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:08.009328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:08.009341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:08.009348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.275 [2024-11-20 07:31:08.009355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.275 [2024-11-20 07:31:08.009368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.275 qpair failed and we were unable to recover it.
00:30:33.275 [2024-11-20 07:31:08.019332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.275 [2024-11-20 07:31:08.019386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.275 [2024-11-20 07:31:08.019404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.275 [2024-11-20 07:31:08.019411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.276 [2024-11-20 07:31:08.019417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.276 [2024-11-20 07:31:08.019433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.276 qpair failed and we were unable to recover it.
00:30:33.276 [2024-11-20 07:31:08.029344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.276 [2024-11-20 07:31:08.029402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.276 [2024-11-20 07:31:08.029415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.276 [2024-11-20 07:31:08.029422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.276 [2024-11-20 07:31:08.029429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.276 [2024-11-20 07:31:08.029443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.276 qpair failed and we were unable to recover it.
00:30:33.538 [2024-11-20 07:31:08.039235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.538 [2024-11-20 07:31:08.039290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.538 [2024-11-20 07:31:08.039303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.538 [2024-11-20 07:31:08.039310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.538 [2024-11-20 07:31:08.039316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.538 [2024-11-20 07:31:08.039330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.538 qpair failed and we were unable to recover it.
00:30:33.538 [2024-11-20 07:31:08.049390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.049442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.049455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.049462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.049469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.049482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.059464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.059517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.059531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.059538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.059544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.059557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.069446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.069509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.069523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.069533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.069540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.069553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.079481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.079577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.079591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.079598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.079604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.079618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.089384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.089435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.089448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.089455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.089461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.089475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.099545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.099602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.099616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.099623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.099629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.099643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.109569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.109625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.109639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.109646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.109652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.109669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.119579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.119637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.119662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.119671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.119678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.119697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.129624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.129683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.129708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.129716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.129724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.129743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.139632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.139693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.139709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.139716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.139722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.139737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.149591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.149645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.149660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.149667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.149673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.149687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.159703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.159772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.159787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.159793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.159800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.159814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.539 [2024-11-20 07:31:08.169789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.539 [2024-11-20 07:31:08.169846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.539 [2024-11-20 07:31:08.169860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.539 [2024-11-20 07:31:08.169874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.539 [2024-11-20 07:31:08.169880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.539 [2024-11-20 07:31:08.169894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.539 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.179751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.179808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.179822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.179829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.179835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.179849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.189796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.189852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.189870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.189877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.189883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.189897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.199805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.199884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.199898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.199909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.199916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.199930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.209825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.209884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.209898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.209905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.209912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.209926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.219744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.219798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.219811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.219818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.219824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.219838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.229914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.229968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.229982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.229989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.229995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.230009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.239925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.239975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.239989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.239996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.240002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.240019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.249950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:33.540 [2024-11-20 07:31:08.250007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:33.540 [2024-11-20 07:31:08.250020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:33.540 [2024-11-20 07:31:08.250027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:33.540 [2024-11-20 07:31:08.250034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:33.540 [2024-11-20 07:31:08.250047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:33.540 qpair failed and we were unable to recover it.
00:30:33.540 [2024-11-20 07:31:08.259975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.540 [2024-11-20 07:31:08.260032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.540 [2024-11-20 07:31:08.260045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.540 [2024-11-20 07:31:08.260051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.540 [2024-11-20 07:31:08.260058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.540 [2024-11-20 07:31:08.260071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.540 qpair failed and we were unable to recover it. 00:30:33.540 [2024-11-20 07:31:08.270005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.540 [2024-11-20 07:31:08.270063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.540 [2024-11-20 07:31:08.270076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.540 [2024-11-20 07:31:08.270083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.540 [2024-11-20 07:31:08.270089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.540 [2024-11-20 07:31:08.270103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.540 qpair failed and we were unable to recover it. 00:30:33.540 [2024-11-20 07:31:08.280031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.540 [2024-11-20 07:31:08.280088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.540 [2024-11-20 07:31:08.280101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.540 [2024-11-20 07:31:08.280108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.540 [2024-11-20 07:31:08.280114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.540 [2024-11-20 07:31:08.280127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.540 qpair failed and we were unable to recover it. 
00:30:33.540 [2024-11-20 07:31:08.290075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.540 [2024-11-20 07:31:08.290134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.540 [2024-11-20 07:31:08.290147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.540 [2024-11-20 07:31:08.290154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.540 [2024-11-20 07:31:08.290160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.540 [2024-11-20 07:31:08.290173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.540 qpair failed and we were unable to recover it. 00:30:33.540 [2024-11-20 07:31:08.300115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.540 [2024-11-20 07:31:08.300172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.540 [2024-11-20 07:31:08.300185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.540 [2024-11-20 07:31:08.300191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.540 [2024-11-20 07:31:08.300198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.540 [2024-11-20 07:31:08.300211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.540 qpair failed and we were unable to recover it. 00:30:33.804 [2024-11-20 07:31:08.310134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.804 [2024-11-20 07:31:08.310187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.804 [2024-11-20 07:31:08.310199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.804 [2024-11-20 07:31:08.310206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.804 [2024-11-20 07:31:08.310212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.804 [2024-11-20 07:31:08.310226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.804 qpair failed and we were unable to recover it. 
00:30:33.804 [2024-11-20 07:31:08.320109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.804 [2024-11-20 07:31:08.320162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.804 [2024-11-20 07:31:08.320174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.804 [2024-11-20 07:31:08.320181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.804 [2024-11-20 07:31:08.320188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.804 [2024-11-20 07:31:08.320201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.804 qpair failed and we were unable to recover it. 00:30:33.804 [2024-11-20 07:31:08.330182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.804 [2024-11-20 07:31:08.330261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.804 [2024-11-20 07:31:08.330275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.804 [2024-11-20 07:31:08.330285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.804 [2024-11-20 07:31:08.330291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.804 [2024-11-20 07:31:08.330306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.804 qpair failed and we were unable to recover it. 00:30:33.804 [2024-11-20 07:31:08.340200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.804 [2024-11-20 07:31:08.340253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.804 [2024-11-20 07:31:08.340267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.804 [2024-11-20 07:31:08.340273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.804 [2024-11-20 07:31:08.340279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.804 [2024-11-20 07:31:08.340293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.804 qpair failed and we were unable to recover it. 
00:30:33.805 [2024-11-20 07:31:08.350240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.350297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.350310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.350317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.350324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.350337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.360235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.360298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.360311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.360318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.360324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.360337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.370294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.370405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.370420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.370427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.370437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.370456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 
00:30:33.805 [2024-11-20 07:31:08.380186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.380252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.380266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.380273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.380280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.380294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.390347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.390406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.390419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.390426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.390432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.390446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.400353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.400415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.400428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.400435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.400441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.400454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 
00:30:33.805 [2024-11-20 07:31:08.410403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.410460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.410473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.410480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.410486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.410500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.420415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.420468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.420482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.420489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.420495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.420509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.430464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.430518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.430531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.430538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.430544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.430558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 
00:30:33.805 [2024-11-20 07:31:08.440465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.440523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.440537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.440544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.440550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.440564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.450396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.450497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.450511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.450519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.450525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.450539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.805 [2024-11-20 07:31:08.460581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.460683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.460700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.460707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.460713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.460727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 
00:30:33.805 [2024-11-20 07:31:08.470567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.805 [2024-11-20 07:31:08.470626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.805 [2024-11-20 07:31:08.470639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.805 [2024-11-20 07:31:08.470646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.805 [2024-11-20 07:31:08.470652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.805 [2024-11-20 07:31:08.470665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.805 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.480569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.480620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.480633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.480640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.480647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.480660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.490602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.490658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.490671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.490678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.490685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.490698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 
00:30:33.806 [2024-11-20 07:31:08.500632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.500688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.500700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.500707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.500714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.500730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.510670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.510727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.510740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.510747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.510753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.510766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.520692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.520744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.520758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.520765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.520771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.520784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 
00:30:33.806 [2024-11-20 07:31:08.530719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.530774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.530787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.530794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.530800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.530813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.540624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.540677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.540691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.540698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.540705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.540718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:33.806 [2024-11-20 07:31:08.550683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.550781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.550796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.550803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.550809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.550822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 
00:30:33.806 [2024-11-20 07:31:08.560803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:33.806 [2024-11-20 07:31:08.560900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:33.806 [2024-11-20 07:31:08.560914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:33.806 [2024-11-20 07:31:08.560921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:33.806 [2024-11-20 07:31:08.560927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:33.806 [2024-11-20 07:31:08.560940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:33.806 qpair failed and we were unable to recover it. 00:30:34.069 [2024-11-20 07:31:08.570816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.570871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.570885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.069 [2024-11-20 07:31:08.570892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.069 [2024-11-20 07:31:08.570898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.069 [2024-11-20 07:31:08.570912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.069 qpair failed and we were unable to recover it. 00:30:34.069 [2024-11-20 07:31:08.580730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.580792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.580805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.069 [2024-11-20 07:31:08.580812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.069 [2024-11-20 07:31:08.580818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.069 [2024-11-20 07:31:08.580832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.069 qpair failed and we were unable to recover it. 
00:30:34.069 [2024-11-20 07:31:08.590892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.590949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.590966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.069 [2024-11-20 07:31:08.590973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.069 [2024-11-20 07:31:08.590979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.069 [2024-11-20 07:31:08.590993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.069 qpair failed and we were unable to recover it. 00:30:34.069 [2024-11-20 07:31:08.600892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.600942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.600956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.069 [2024-11-20 07:31:08.600963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.069 [2024-11-20 07:31:08.600969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.069 [2024-11-20 07:31:08.600982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.069 qpair failed and we were unable to recover it. 00:30:34.069 [2024-11-20 07:31:08.610935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.610991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.611004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.069 [2024-11-20 07:31:08.611010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.069 [2024-11-20 07:31:08.611017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.069 [2024-11-20 07:31:08.611030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.069 qpair failed and we were unable to recover it. 
00:30:34.069 [2024-11-20 07:31:08.620970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.069 [2024-11-20 07:31:08.621026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.069 [2024-11-20 07:31:08.621039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.621046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.621052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.621066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.630999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.631055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.631068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.631075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.631081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.631098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.641020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.641075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.641088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.641095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.641101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.641114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 
00:30:34.070 [2024-11-20 07:31:08.650912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.650967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.650981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.650988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.650994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.651008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.661071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.661129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.661142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.661149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.661155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.661169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.671093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.671188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.671201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.671208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.671214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.671227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 
00:30:34.070 [2024-11-20 07:31:08.681101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.681152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.681165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.681172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.681178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.681191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.691017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.691072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.691085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.691092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.691098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.691111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.701192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.701289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.701305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.701312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.701319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.701333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 
00:30:34.070 [2024-11-20 07:31:08.711279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.711330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.711344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.711351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.711358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.711371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.721279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.721332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.721352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.721359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.721365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.721379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.731266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.731321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.731335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.731342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.731348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.731361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 
00:30:34.070 [2024-11-20 07:31:08.741314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.741368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.741382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.070 [2024-11-20 07:31:08.741388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.070 [2024-11-20 07:31:08.741395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.070 [2024-11-20 07:31:08.741408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.070 qpair failed and we were unable to recover it. 00:30:34.070 [2024-11-20 07:31:08.751336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.070 [2024-11-20 07:31:08.751394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.070 [2024-11-20 07:31:08.751408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.071 [2024-11-20 07:31:08.751415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.071 [2024-11-20 07:31:08.751422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.071 [2024-11-20 07:31:08.751436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.071 qpair failed and we were unable to recover it. 00:30:34.071 [2024-11-20 07:31:08.761333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.071 [2024-11-20 07:31:08.761389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.071 [2024-11-20 07:31:08.761402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.071 [2024-11-20 07:31:08.761408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.071 [2024-11-20 07:31:08.761415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.071 [2024-11-20 07:31:08.761432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.071 qpair failed and we were unable to recover it. 
00:30:34.071 [2024-11-20 07:31:08.771369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.771422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.771436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.771443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.771449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.771462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.781283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.781350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.781363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.781369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.781376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.781389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.791446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.791503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.791516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.791523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.791529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.791543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.801407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.801454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.801466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.801473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.801480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.801492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.811474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.811529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.811542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.811549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.811555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.811568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.821517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.821571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.821587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.821594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.821600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.821614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.071 [2024-11-20 07:31:08.831527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.071 [2024-11-20 07:31:08.831583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.071 [2024-11-20 07:31:08.831597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.071 [2024-11-20 07:31:08.831604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.071 [2024-11-20 07:31:08.831610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.071 [2024-11-20 07:31:08.831624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.071 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.841512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.841561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.841574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.841581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.841588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.841601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.851589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.851652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.851681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.851690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.851697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.851716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.861552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.861612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.861627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.861634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.861641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.861656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.871661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.871718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.871732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.871739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.871747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.871761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.881633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.881681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.881694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.881701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.881707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.881721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.891671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.891737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.891750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.891757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.891764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.891781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.901736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.901789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.901803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.335 [2024-11-20 07:31:08.901810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.335 [2024-11-20 07:31:08.901816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.335 [2024-11-20 07:31:08.901830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.335 qpair failed and we were unable to recover it.
00:30:34.335 [2024-11-20 07:31:08.911745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.335 [2024-11-20 07:31:08.911837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.335 [2024-11-20 07:31:08.911851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.911858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.911870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.911884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.921733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.921780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.921794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.921800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.921807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.921820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.931803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.931854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.931874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.931882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.931888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.931903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.941846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.941909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.941923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.941930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.941936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.941949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.951890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.951944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.951958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.951965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.951971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.951985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.961758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.961820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.961834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.961841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.961847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.961867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.971910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.971967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.971980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.971987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.971993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.972007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.981962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.982107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.982125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.982132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.982139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.982152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:08.991998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:08.992055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:08.992068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:08.992075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:08.992081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:08.992095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:09.001968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:09.002018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:09.002032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:09.002039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:09.002045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:09.002059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:09.012165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:09.012216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:09.012230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:09.012237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:09.012243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:09.012256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:09.022058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:09.022117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:09.022131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:09.022139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:09.022149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:09.022163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:09.032104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.336 [2024-11-20 07:31:09.032161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.336 [2024-11-20 07:31:09.032174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.336 [2024-11-20 07:31:09.032181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.336 [2024-11-20 07:31:09.032187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.336 [2024-11-20 07:31:09.032200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.336 qpair failed and we were unable to recover it.
00:30:34.336 [2024-11-20 07:31:09.042063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.042113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.042127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.042134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.042140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.042154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-11-20 07:31:09.052157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.052240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.052254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.052261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.052268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.052282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-11-20 07:31:09.062066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.062158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.062172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.062179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.062186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.062199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-11-20 07:31:09.072223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.072305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.072318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.072325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.072332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.072345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-11-20 07:31:09.082189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.082239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.082252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.082259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.082265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.082278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.337 [2024-11-20 07:31:09.092261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.337 [2024-11-20 07:31:09.092315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.337 [2024-11-20 07:31:09.092328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.337 [2024-11-20 07:31:09.092335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.337 [2024-11-20 07:31:09.092341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.337 [2024-11-20 07:31:09.092355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.337 qpair failed and we were unable to recover it.
00:30:34.598 [2024-11-20 07:31:09.102342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.598 [2024-11-20 07:31:09.102398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.598 [2024-11-20 07:31:09.102411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.598 [2024-11-20 07:31:09.102418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.598 [2024-11-20 07:31:09.102425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.598 [2024-11-20 07:31:09.102438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.598 qpair failed and we were unable to recover it.
00:30:34.598 [2024-11-20 07:31:09.112391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.598 [2024-11-20 07:31:09.112482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.598 [2024-11-20 07:31:09.112499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.598 [2024-11-20 07:31:09.112506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.598 [2024-11-20 07:31:09.112512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.598 [2024-11-20 07:31:09.112526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.598 qpair failed and we were unable to recover it.
00:30:34.598 [2024-11-20 07:31:09.122300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.598 [2024-11-20 07:31:09.122343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.598 [2024-11-20 07:31:09.122357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.598 [2024-11-20 07:31:09.122364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.598 [2024-11-20 07:31:09.122370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.122384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.132349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.132407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.132420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.132427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.132433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.132446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.142269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.142326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.142339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.142346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.142353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.142366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.152449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.152504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.152518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.152525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.152535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.152549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.162317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.162364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.162379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.162386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.162393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.162407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.172413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.172461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.172476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.172483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.172489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.172503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.182495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.182551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.182564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.182571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.182577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.182591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.192542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.192605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.192630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.192638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.192645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.192665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.202504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.599 [2024-11-20 07:31:09.202559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.599 [2024-11-20 07:31:09.202576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.599 [2024-11-20 07:31:09.202583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.599 [2024-11-20 07:31:09.202590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.599 [2024-11-20 07:31:09.202605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.599 qpair failed and we were unable to recover it.
00:30:34.599 [2024-11-20 07:31:09.212414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.212504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.212518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.212525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.212531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.212546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.222609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.222666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.222679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.222686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.222693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.222706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.232634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.232696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.232722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.232730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.232737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.232756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.242611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.242676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.242697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.242704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.242710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.242726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.252648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.252697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.252711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.252718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.252725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.252739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.262686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.262753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.262766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.262773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.262780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.262793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.272721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.272771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.272785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.272792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.272799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.272813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.282738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.282834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.282847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.282855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.282871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.282886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.292766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.292817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.292831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.292838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.292844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.292858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.302836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.600 [2024-11-20 07:31:09.302900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.600 [2024-11-20 07:31:09.302914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.600 [2024-11-20 07:31:09.302921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.600 [2024-11-20 07:31:09.302927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.600 [2024-11-20 07:31:09.302941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.600 qpair failed and we were unable to recover it.
00:30:34.600 [2024-11-20 07:31:09.312838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.601 [2024-11-20 07:31:09.312890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.601 [2024-11-20 07:31:09.312903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.601 [2024-11-20 07:31:09.312910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.601 [2024-11-20 07:31:09.312917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.601 [2024-11-20 07:31:09.312930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.601 qpair failed and we were unable to recover it.
00:30:34.601 [2024-11-20 07:31:09.322756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.601 [2024-11-20 07:31:09.322805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.601 [2024-11-20 07:31:09.322819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.601 [2024-11-20 07:31:09.322826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.601 [2024-11-20 07:31:09.322832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.601 [2024-11-20 07:31:09.322846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.601 qpair failed and we were unable to recover it.
00:30:34.601 [2024-11-20 07:31:09.332843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:34.601 [2024-11-20 07:31:09.332897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:34.601 [2024-11-20 07:31:09.332911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:34.601 [2024-11-20 07:31:09.332918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:34.601 [2024-11-20 07:31:09.332925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:34.601 [2024-11-20 07:31:09.332939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:34.601 qpair failed and we were unable to recover it.
00:30:34.601 [2024-11-20 07:31:09.342952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.601 [2024-11-20 07:31:09.343005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.601 [2024-11-20 07:31:09.343019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.601 [2024-11-20 07:31:09.343026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.601 [2024-11-20 07:31:09.343032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.601 [2024-11-20 07:31:09.343046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.601 qpair failed and we were unable to recover it. 00:30:34.601 [2024-11-20 07:31:09.352928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.601 [2024-11-20 07:31:09.352982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.601 [2024-11-20 07:31:09.352996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.601 [2024-11-20 07:31:09.353003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.601 [2024-11-20 07:31:09.353009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.601 [2024-11-20 07:31:09.353022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.601 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.362954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.363033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.363046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.363053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.363059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.363073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 
00:30:34.864 [2024-11-20 07:31:09.372853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.372908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.372929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.372936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.372942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.372956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.383038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.383094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.383107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.383115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.383121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.383134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.393014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.393066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.393080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.393086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.393093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.393106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 
00:30:34.864 [2024-11-20 07:31:09.403069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.403189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.403203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.403210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.403216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.403229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.413062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.413109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.413122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.413129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.413139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.413152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.423280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.423342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.423355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.423362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.423368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.423381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 
00:30:34.864 [2024-11-20 07:31:09.433147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.433246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.433259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.433267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.433273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.433286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.443169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.443216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.443229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.443236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.443242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.443255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 00:30:34.864 [2024-11-20 07:31:09.453059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.864 [2024-11-20 07:31:09.453107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.864 [2024-11-20 07:31:09.453121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.864 [2024-11-20 07:31:09.453128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.864 [2024-11-20 07:31:09.453134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.864 [2024-11-20 07:31:09.453148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.864 qpair failed and we were unable to recover it. 
00:30:34.865 [2024-11-20 07:31:09.463242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.463306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.463320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.463327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.463333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.463347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.473272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.473372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.473385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.473392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.473398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.473411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.483253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.483337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.483351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.483358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.483364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.483378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 
00:30:34.865 [2024-11-20 07:31:09.493287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.493366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.493380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.493386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.493393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.493407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.503350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.503402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.503419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.503426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.503432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.503446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.513335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.513384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.513397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.513404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.513410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.513423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 
00:30:34.865 [2024-11-20 07:31:09.523357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.523406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.523419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.523426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.523432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.523446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.533390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.533450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.533464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.533471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.533477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.533491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.543447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.543504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.543517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.543524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.543534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.543547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 
00:30:34.865 [2024-11-20 07:31:09.553456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.553552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.553566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.553573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.553579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.553593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.563383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.563431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.563444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.563451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.563457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.563471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.865 [2024-11-20 07:31:09.573492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.573542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.573555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.573562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.573568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.573582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 
00:30:34.865 [2024-11-20 07:31:09.583434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.865 [2024-11-20 07:31:09.583505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.865 [2024-11-20 07:31:09.583519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.865 [2024-11-20 07:31:09.583526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.865 [2024-11-20 07:31:09.583533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.865 [2024-11-20 07:31:09.583546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.865 qpair failed and we were unable to recover it. 00:30:34.866 [2024-11-20 07:31:09.593562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.866 [2024-11-20 07:31:09.593622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.866 [2024-11-20 07:31:09.593648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.866 [2024-11-20 07:31:09.593656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.866 [2024-11-20 07:31:09.593663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.866 [2024-11-20 07:31:09.593682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.866 qpair failed and we were unable to recover it. 00:30:34.866 [2024-11-20 07:31:09.603486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:34.866 [2024-11-20 07:31:09.603591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:34.866 [2024-11-20 07:31:09.603616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:34.866 [2024-11-20 07:31:09.603625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:34.866 [2024-11-20 07:31:09.603632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:34.866 [2024-11-20 07:31:09.603651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:34.866 qpair failed and we were unable to recover it. 
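For reference, the status pair that every attempt reports decodes as follows: sct 1 is the command-specific status code type, and sc 130 (0x82) is the NVMe-oF Fabrics CONNECT "invalid parameters" status, which is what the target returns when an I/O-queue CONNECT names a controller ID it does not recognize (the target-side "Unknown controller ID 0x1" message above). rc -5 is -EIO from the host's connect poll, and the later -6 is -ENXIO, printed by the log itself as "No such device or address". A minimal sketch of the decode, assuming the constant names in SPDK's public spec headers (include/spdk/nvme_spec.h and include/spdk/nvmf_spec.h):

    #include <assert.h>
    #include "spdk/nvme_spec.h"
    #include "spdk/nvmf_spec.h"

    int main(void)
    {
        /* The status every failed CONNECT in this log carries: sct 1, sc 130. */
        struct spdk_nvme_status st = { .sc = 130, .sct = 1 };

        /* sct 1: command-specific status code type. */
        assert(st.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC);

        /* sc 130 == 0x82: Fabrics CONNECT rejected for invalid parameters,
         * here the stale controller ID carried in the CONNECT data. */
        assert(st.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM);

        return 0;
    }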
[The retries continue without change while the elapsed-time prefix advances from 00:30:34.600 through 00:30:35.401; the final attempt in this burst is:]
00:30:35.401 [2024-11-20 07:31:09.964419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.401 [2024-11-20 07:31:09.964484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.401 [2024-11-20 07:31:09.964497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.401 [2024-11-20 07:31:09.964505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.401 [2024-11-20 07:31:09.964511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.401 [2024-11-20 07:31:09.964524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.401 qpair failed and we were unable to recover it.
00:30:35.401 [2024-11-20 07:31:09.974550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.401 [2024-11-20 07:31:09.974593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.401 [2024-11-20 07:31:09.974606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.401 [2024-11-20 07:31:09.974613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.401 [2024-11-20 07:31:09.974619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.401 [2024-11-20 07:31:09.974632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.401 qpair failed and we were unable to recover it. 00:30:35.401 [2024-11-20 07:31:09.984568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.401 [2024-11-20 07:31:09.984613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.401 [2024-11-20 07:31:09.984626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.401 [2024-11-20 07:31:09.984633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.401 [2024-11-20 07:31:09.984639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.401 [2024-11-20 07:31:09.984653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.401 qpair failed and we were unable to recover it. 00:30:35.401 [2024-11-20 07:31:09.994583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.401 [2024-11-20 07:31:09.994643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.401 [2024-11-20 07:31:09.994668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.401 [2024-11-20 07:31:09.994677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.401 [2024-11-20 07:31:09.994683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.401 [2024-11-20 07:31:09.994703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.401 qpair failed and we were unable to recover it. 
00:30:35.401 [2024-11-20 07:31:10.004646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.401 [2024-11-20 07:31:10.004701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.401 [2024-11-20 07:31:10.004726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.401 [2024-11-20 07:31:10.004735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.401 [2024-11-20 07:31:10.004742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.004761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.014526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.014572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.014589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.014596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.014603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.014618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.024679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.024769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.024791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.024799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.024805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.024821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 
00:30:35.402 [2024-11-20 07:31:10.034726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.034775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.034788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.034795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.034802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.034816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.044734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.044809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.044824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.044831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.044838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.044851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.054628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.054676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.054690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.054698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.054704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.054718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 
00:30:35.402 [2024-11-20 07:31:10.064688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.064747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.064761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.064768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.064778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.064792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.074797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.074848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.074868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.074875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.074881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.074896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.084851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.084900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.084913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.084920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.084927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.084940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 
00:30:35.402 [2024-11-20 07:31:10.094890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.094941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.094954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.094961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.094968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.094981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.104867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.104920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.104938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.104945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.104951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.104967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.114793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.114841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.114854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.114867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.114874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.114888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 
00:30:35.402 [2024-11-20 07:31:10.124941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.125028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.125042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.125049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.125056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.125070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.134953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.135029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.135042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.135049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.135055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.135069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.402 [2024-11-20 07:31:10.144999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.145045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.145058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.145065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.145072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.145085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 
00:30:35.402 [2024-11-20 07:31:10.155048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.402 [2024-11-20 07:31:10.155115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.402 [2024-11-20 07:31:10.155132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.402 [2024-11-20 07:31:10.155139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.402 [2024-11-20 07:31:10.155146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.402 [2024-11-20 07:31:10.155159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.402 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.164921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.164971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.164984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.164991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.164997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.165011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.175125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.175171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.175184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.175191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.175197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.175211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 
00:30:35.666 [2024-11-20 07:31:10.185114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.185159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.185171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.185179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.185185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.185198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.195145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.195210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.195223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.195230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.195240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.195254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.205037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.205092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.205105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.205112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.205118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.205131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 
00:30:35.666 [2024-11-20 07:31:10.215146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.215190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.215203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.215210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.215216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.215229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.225198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.225245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.225258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.225265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.225271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.225284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.235235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.235333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.235347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.235354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.235361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.235374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 
00:30:35.666 [2024-11-20 07:31:10.245260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.245305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.245319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.245325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.245332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.666 [2024-11-20 07:31:10.245345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-11-20 07:31:10.255275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.666 [2024-11-20 07:31:10.255322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.666 [2024-11-20 07:31:10.255335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.666 [2024-11-20 07:31:10.255342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.666 [2024-11-20 07:31:10.255348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.255361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.265302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.265350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.265363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.265370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.265376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.265389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 
00:30:35.667 [2024-11-20 07:31:10.275349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.275395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.275408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.275414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.275420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.275433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.285228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.285274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.285291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.285298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.285304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.285318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.295393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.295436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.295449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.295456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.295462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.295475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 
00:30:35.667 [2024-11-20 07:31:10.305423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.305472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.305485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.305492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.305498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.305511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.315448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.315500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.315513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.315520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.315526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.315539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.325437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.325478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.325491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.325498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.325507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.325521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 
00:30:35.667 [2024-11-20 07:31:10.335534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.335581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.335594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.335601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.335607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.335620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.345528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.345581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.345606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.345615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.345622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.345641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.355555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.355611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.355636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.355645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.355652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.355671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 
00:30:35.667 [2024-11-20 07:31:10.365430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.365471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.365487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.365494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.365500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.365515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.375554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.375597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.375611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.375619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.375625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.375639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-11-20 07:31:10.385611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.667 [2024-11-20 07:31:10.385671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.667 [2024-11-20 07:31:10.385695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.667 [2024-11-20 07:31:10.385704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.667 [2024-11-20 07:31:10.385711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.667 [2024-11-20 07:31:10.385731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-11-20 07:31:10.395661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.668 [2024-11-20 07:31:10.395713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.668 [2024-11-20 07:31:10.395728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.668 [2024-11-20 07:31:10.395735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.668 [2024-11-20 07:31:10.395741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.668 [2024-11-20 07:31:10.395756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-11-20 07:31:10.405663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.668 [2024-11-20 07:31:10.405717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.668 [2024-11-20 07:31:10.405731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.668 [2024-11-20 07:31:10.405738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.668 [2024-11-20 07:31:10.405744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.668 [2024-11-20 07:31:10.405758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-11-20 07:31:10.415685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.668 [2024-11-20 07:31:10.415732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.668 [2024-11-20 07:31:10.415750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.668 [2024-11-20 07:31:10.415757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.668 [2024-11-20 07:31:10.415764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.668 [2024-11-20 07:31:10.415778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-11-20 07:31:10.425597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.668 [2024-11-20 07:31:10.425667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.668 [2024-11-20 07:31:10.425681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.668 [2024-11-20 07:31:10.425690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.668 [2024-11-20 07:31:10.425697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.668 [2024-11-20 07:31:10.425711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.931 [2024-11-20 07:31:10.435673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.931 [2024-11-20 07:31:10.435736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.931 [2024-11-20 07:31:10.435749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.931 [2024-11-20 07:31:10.435757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.931 [2024-11-20 07:31:10.435763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.931 [2024-11-20 07:31:10.435777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.931 qpair failed and we were unable to recover it. 00:30:35.931 [2024-11-20 07:31:10.445755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.931 [2024-11-20 07:31:10.445796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.931 [2024-11-20 07:31:10.445810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.931 [2024-11-20 07:31:10.445817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.931 [2024-11-20 07:31:10.445823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.931 [2024-11-20 07:31:10.445837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.931 qpair failed and we were unable to recover it. 
00:30:35.931 [2024-11-20 07:31:10.455789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.931 [2024-11-20 07:31:10.455843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.931 [2024-11-20 07:31:10.455857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.931 [2024-11-20 07:31:10.455868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.931 [2024-11-20 07:31:10.455879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.931 [2024-11-20 07:31:10.455893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.931 qpair failed and we were unable to recover it. 00:30:35.931 [2024-11-20 07:31:10.465788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.931 [2024-11-20 07:31:10.465870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.931 [2024-11-20 07:31:10.465884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.931 [2024-11-20 07:31:10.465891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.931 [2024-11-20 07:31:10.465898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.931 [2024-11-20 07:31:10.465912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.931 qpair failed and we were unable to recover it. 00:30:35.931 [2024-11-20 07:31:10.475883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.931 [2024-11-20 07:31:10.475936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.931 [2024-11-20 07:31:10.475949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.931 [2024-11-20 07:31:10.475956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.931 [2024-11-20 07:31:10.475962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:35.931 [2024-11-20 07:31:10.475976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.931 qpair failed and we were unable to recover it. 
00:30:35.931 [2024-11-20 07:31:10.485853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.485899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.485912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.485919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.485925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.485938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.495891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.495944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.495957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.495964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.495970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.495984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.505935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.505991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.506005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.506012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.506018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.506033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.515848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.515899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.515913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.515920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.515926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.515940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.526008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.526089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.526103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.526110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.526116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.526130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.535993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.536036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.536050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.536057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.536063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.536077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.546052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.546098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.546114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.546121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.931 [2024-11-20 07:31:10.546127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.931 [2024-11-20 07:31:10.546141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.931 qpair failed and we were unable to recover it.
00:30:35.931 [2024-11-20 07:31:10.556101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.931 [2024-11-20 07:31:10.556193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.931 [2024-11-20 07:31:10.556206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.931 [2024-11-20 07:31:10.556213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.556219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.556233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.566102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.566151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.566165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.566172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.566178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.566191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.575994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.576039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.576053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.576060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.576066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.576079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.586157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.586205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.586217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.586224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.586234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.586247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.596061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.596113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.596127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.596133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.596140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.596153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.606119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.606182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.606195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.606202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.606208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.606222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.616230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.616279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.616292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.616299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.616305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.616318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.626281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.626334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.626349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.626356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.626362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.626376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.636308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.636355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.636368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.636375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.636382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.636395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.646195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.646245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.646259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.646266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.646272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.646286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.656352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.656398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.656411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.656418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.656425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.656438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.666393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.666489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.666503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.666510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.666516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.666529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.676419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.676469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.676490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.676497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.676503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.932 [2024-11-20 07:31:10.676517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.932 qpair failed and we were unable to recover it.
00:30:35.932 [2024-11-20 07:31:10.686439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.932 [2024-11-20 07:31:10.686487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.932 [2024-11-20 07:31:10.686502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.932 [2024-11-20 07:31:10.686509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.932 [2024-11-20 07:31:10.686515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:35.933 [2024-11-20 07:31:10.686529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:35.933 qpair failed and we were unable to recover it.
00:30:36.194 [2024-11-20 07:31:10.696464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.194 [2024-11-20 07:31:10.696515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.194 [2024-11-20 07:31:10.696529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.194 [2024-11-20 07:31:10.696536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.194 [2024-11-20 07:31:10.696542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.194 [2024-11-20 07:31:10.696556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.194 qpair failed and we were unable to recover it.
00:30:36.194 [2024-11-20 07:31:10.706478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.194 [2024-11-20 07:31:10.706576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.194 [2024-11-20 07:31:10.706590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.194 [2024-11-20 07:31:10.706598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.194 [2024-11-20 07:31:10.706604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.194 [2024-11-20 07:31:10.706618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.194 qpair failed and we were unable to recover it.
00:30:36.194 [2024-11-20 07:31:10.716502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.194 [2024-11-20 07:31:10.716555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.194 [2024-11-20 07:31:10.716580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.194 [2024-11-20 07:31:10.716594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.194 [2024-11-20 07:31:10.716601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.194 [2024-11-20 07:31:10.716621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.194 qpair failed and we were unable to recover it.
00:30:36.194 [2024-11-20 07:31:10.726535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.194 [2024-11-20 07:31:10.726624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.726648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.726657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.726664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.726683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.736563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.736626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.736642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.736649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.736656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.736671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.746605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.746700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.746714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.746721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.746727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.746742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.756507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.756556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.756569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.756576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.756582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.756596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.766667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.766727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.766742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.766749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.766755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.766769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.776666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.776717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.776731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.776738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.776744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.776758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.786699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.786746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.786760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.786767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.786773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.786787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.796737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.796791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.796805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.796811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.796818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.796831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.806768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.806827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.806844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.806851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.806857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.806876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.816755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.816845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.816861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.816873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.816880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.816894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.826804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.826852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.826872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.826879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.826885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.826899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.836858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.836912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.836926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.836932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.836939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.836952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.846869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.195 [2024-11-20 07:31:10.846962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.195 [2024-11-20 07:31:10.846975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.195 [2024-11-20 07:31:10.846986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.195 [2024-11-20 07:31:10.846992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.195 [2024-11-20 07:31:10.847006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.195 qpair failed and we were unable to recover it.
00:30:36.195 [2024-11-20 07:31:10.856867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.856914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.856928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.856935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.856941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.856955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.866978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.867026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.867039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.867046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.867052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.867066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.876926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.876974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.876987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.876994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.877000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.877014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.886966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.887009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.887022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.887029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.887035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.887049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.897011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.897056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.897070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.897077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.897083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.897097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.906994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.907044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.907057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.907064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.907070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.907084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.917040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.917086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.917099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.917107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.917113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.917126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.927094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.927162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.927175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.927182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.927189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.927202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.937103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.937197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.937214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.937221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.937228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.937241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.947122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.947184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.947197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.947204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.947210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.947224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.196 [2024-11-20 07:31:10.957171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.196 [2024-11-20 07:31:10.957221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.196 [2024-11-20 07:31:10.957234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.196 [2024-11-20 07:31:10.957241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.196 [2024-11-20 07:31:10.957247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.196 [2024-11-20 07:31:10.957260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.196 qpair failed and we were unable to recover it.
00:30:36.460 [2024-11-20 07:31:10.967083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.460 [2024-11-20 07:31:10.967142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.460 [2024-11-20 07:31:10.967155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.460 [2024-11-20 07:31:10.967163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.460 [2024-11-20 07:31:10.967169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.460 [2024-11-20 07:31:10.967182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.460 qpair failed and we were unable to recover it.
00:30:36.460 [2024-11-20 07:31:10.977265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.460 [2024-11-20 07:31:10.977313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.460 [2024-11-20 07:31:10.977327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.460 [2024-11-20 07:31:10.977337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.460 [2024-11-20 07:31:10.977343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.460 [2024-11-20 07:31:10.977357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.460 qpair failed and we were unable to recover it.
00:30:36.460 [2024-11-20 07:31:10.987236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.460 [2024-11-20 07:31:10.987290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.460 [2024-11-20 07:31:10.987303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.460 [2024-11-20 07:31:10.987310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.460 [2024-11-20 07:31:10.987316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.460 [2024-11-20 07:31:10.987330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.460 qpair failed and we were unable to recover it.
00:30:36.460 [2024-11-20 07:31:10.997289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.460 [2024-11-20 07:31:10.997336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.460 [2024-11-20 07:31:10.997349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.460 [2024-11-20 07:31:10.997355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.460 [2024-11-20 07:31:10.997362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.460 [2024-11-20 07:31:10.997375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.460 qpair failed and we were unable to recover it.
00:30:36.460 [2024-11-20 07:31:11.007205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.460 [2024-11-20 07:31:11.007254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.007267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.007274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.007281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.007294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.017290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.017337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.017351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.017357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.017364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.017377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.027380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.027426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.027439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.027446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.027452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.027466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.037251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.037296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.037310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.037317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.037323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.037337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.047407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.047459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.047472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.047479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.047486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.047500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.057435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.057479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.057492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.057499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.057505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.057519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.067448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.067494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.067511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.067518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.067525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.067538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.077477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.077533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.077547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.077554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.077560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.077574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.087514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.087561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.087574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.087580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.087587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.087600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.097530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.097573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.097586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.097593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.097599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.097613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.107579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.107640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.107654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.107664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.107670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.107684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.461 [2024-11-20 07:31:11.117478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.461 [2024-11-20 07:31:11.117536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.461 [2024-11-20 07:31:11.117550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.461 [2024-11-20 07:31:11.117557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.461 [2024-11-20 07:31:11.117563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.461 [2024-11-20 07:31:11.117576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.461 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.127607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.127657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.127682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.127691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.127698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.127717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.137652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.137695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.137710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.137718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.137724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.137739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.147687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.147735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.147750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.147757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.147763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.147777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.157582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.157638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.157652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.157658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.157665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.157679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.167731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.167778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.167792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.167799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.167806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.167820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.177651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.177700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.177715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.177722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.177729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.177743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.187785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.187837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.187852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.187859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.187872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.187886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.197795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.197878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.197892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.197899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.197905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.197919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.207832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.207920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.207933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.207940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.207947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.207961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.462 [2024-11-20 07:31:11.217819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.462 [2024-11-20 07:31:11.217878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.462 [2024-11-20 07:31:11.217892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.462 [2024-11-20 07:31:11.217900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.462 [2024-11-20 07:31:11.217906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.462 [2024-11-20 07:31:11.217920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.462 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.227892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.227942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.227955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.227962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.227968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.227982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.237901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.237949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.237963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.237974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.237980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.237994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.247944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.247992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.248006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.248013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.248019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.248032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.257954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.258011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.258024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.258031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.258037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.258051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.268003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.268063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.268076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.268083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.268089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.268103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.278034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.278079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.278092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.278099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.278106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.278119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.288020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.288062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.288076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.288082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.288089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.288102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.297979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.298025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.298038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.298045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.298051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.298064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.725 [2024-11-20 07:31:11.308094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.725 [2024-11-20 07:31:11.308140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.725 [2024-11-20 07:31:11.308154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.725 [2024-11-20 07:31:11.308161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.725 [2024-11-20 07:31:11.308167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.725 [2024-11-20 07:31:11.308180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.725 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.318153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.318199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.318213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.318220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.318226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.318239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.328171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.328220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.328234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.328241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.328247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.328261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.338202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.338284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.338297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.338304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.338310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.338324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.348229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.348275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.348288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.348295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.348301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.348315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.358206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.358253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.358267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.358274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.358280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.358293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.368255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.368312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.368326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.368336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.368342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.368355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.378162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.378209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.378223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.378229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.378236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.378250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.388325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.388371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.388385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.388391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.388397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.388411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.398357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.398432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.398445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.398453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.398459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.398472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.408366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.408423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.408436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.408443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.408449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.408470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.418398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.418450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.418464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.418470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.418477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.418490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.428396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.428442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.428455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.428462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.428469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.428482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.438324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.726 [2024-11-20 07:31:11.438405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.726 [2024-11-20 07:31:11.438418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.726 [2024-11-20 07:31:11.438425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.726 [2024-11-20 07:31:11.438431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.726 [2024-11-20 07:31:11.438444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.726 qpair failed and we were unable to recover it.
00:30:36.726 [2024-11-20 07:31:11.448472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.727 [2024-11-20 07:31:11.448516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.727 [2024-11-20 07:31:11.448531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.727 [2024-11-20 07:31:11.448538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.727 [2024-11-20 07:31:11.448544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.727 [2024-11-20 07:31:11.448557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.727 qpair failed and we were unable to recover it.
00:30:36.727 [2024-11-20 07:31:11.458499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.727 [2024-11-20 07:31:11.458545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.727 [2024-11-20 07:31:11.458560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.727 [2024-11-20 07:31:11.458567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.727 [2024-11-20 07:31:11.458573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.727 [2024-11-20 07:31:11.458586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.727 qpair failed and we were unable to recover it.
00:30:36.727 [2024-11-20 07:31:11.468495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.727 [2024-11-20 07:31:11.468550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.727 [2024-11-20 07:31:11.468575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.727 [2024-11-20 07:31:11.468585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.727 [2024-11-20 07:31:11.468594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.727 [2024-11-20 07:31:11.468614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.727 qpair failed and we were unable to recover it.
00:30:36.727 [2024-11-20 07:31:11.478569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.727 [2024-11-20 07:31:11.478668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.727 [2024-11-20 07:31:11.478686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.727 [2024-11-20 07:31:11.478695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.727 [2024-11-20 07:31:11.478704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.727 [2024-11-20 07:31:11.478721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.727 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.488573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.488625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.488650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.488659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.488666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.488686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.498609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.498663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.498688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.498702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.498708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.498729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.508520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.508581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.508596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.508603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.508610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.508625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.518681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.518733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.518747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.518754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.518760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.518774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.528555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.528601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.528615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.528622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.528628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.528642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.538723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.538793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.538806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.538813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.538820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.538837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.548747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.548811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.548825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.548832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.548839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.548853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.558786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.558836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.558850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.558857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.558868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.558883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.568801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.568848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.568865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.568873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.568879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.568893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.989 qpair failed and we were unable to recover it.
00:30:36.989 [2024-11-20 07:31:11.578698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.989 [2024-11-20 07:31:11.578748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.989 [2024-11-20 07:31:11.578762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.989 [2024-11-20 07:31:11.578769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.989 [2024-11-20 07:31:11.578775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.989 [2024-11-20 07:31:11.578789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.990 qpair failed and we were unable to recover it.
00:30:36.990 [2024-11-20 07:31:11.588869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.990 [2024-11-20 07:31:11.588966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.990 [2024-11-20 07:31:11.588980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.990 [2024-11-20 07:31:11.588987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.990 [2024-11-20 07:31:11.588993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.990 [2024-11-20 07:31:11.589007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.990 qpair failed and we were unable to recover it.
00:30:36.990 [2024-11-20 07:31:11.598774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.990 [2024-11-20 07:31:11.598824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.990 [2024-11-20 07:31:11.598839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.990 [2024-11-20 07:31:11.598846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.990 [2024-11-20 07:31:11.598852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.990 [2024-11-20 07:31:11.598871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.990 qpair failed and we were unable to recover it.
00:30:36.990 [2024-11-20 07:31:11.608909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.990 [2024-11-20 07:31:11.608952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.990 [2024-11-20 07:31:11.608966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.990 [2024-11-20 07:31:11.608973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.990 [2024-11-20 07:31:11.608979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.990 [2024-11-20 07:31:11.608993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.990 qpair failed and we were unable to recover it.
00:30:36.990 [2024-11-20 07:31:11.618939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.990 [2024-11-20 07:31:11.618983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.990 [2024-11-20 07:31:11.618996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.990 [2024-11-20 07:31:11.619003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.990 [2024-11-20 07:31:11.619009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490
00:30:36.990 [2024-11-20 07:31:11.619023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.990 qpair failed and we were unable to recover it.
00:30:36.990 [2024-11-20 07:31:11.628956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.629011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.629023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.629034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.629040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.629054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.639000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.639050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.639064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.639071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.639077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.639091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.648884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.648931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.648945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.648952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.648958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.648972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 
00:30:36.990 [2024-11-20 07:31:11.658995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.659039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.659052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.659059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.659065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.659079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.668971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.669025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.669038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.669045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.669051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.669068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.679102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.679201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.679215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.679222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.679228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.679242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 
00:30:36.990 [2024-11-20 07:31:11.689123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.689169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.689183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.689189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.689195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.689209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.699136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.699180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.699194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.699201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.699207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.699220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 00:30:36.990 [2024-11-20 07:31:11.709081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.990 [2024-11-20 07:31:11.709152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.990 [2024-11-20 07:31:11.709167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.990 [2024-11-20 07:31:11.709174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.990 [2024-11-20 07:31:11.709180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.990 [2024-11-20 07:31:11.709194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.990 qpair failed and we were unable to recover it. 
00:30:36.991 [2024-11-20 07:31:11.719228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.991 [2024-11-20 07:31:11.719279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.991 [2024-11-20 07:31:11.719293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.991 [2024-11-20 07:31:11.719301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.991 [2024-11-20 07:31:11.719308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.991 [2024-11-20 07:31:11.719322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.991 qpair failed and we were unable to recover it. 00:30:36.991 [2024-11-20 07:31:11.729243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.991 [2024-11-20 07:31:11.729289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.991 [2024-11-20 07:31:11.729302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.991 [2024-11-20 07:31:11.729309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.991 [2024-11-20 07:31:11.729316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.991 [2024-11-20 07:31:11.729329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.991 qpair failed and we were unable to recover it. 00:30:36.991 [2024-11-20 07:31:11.739286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.991 [2024-11-20 07:31:11.739327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.991 [2024-11-20 07:31:11.739341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.991 [2024-11-20 07:31:11.739348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.991 [2024-11-20 07:31:11.739354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.991 [2024-11-20 07:31:11.739367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.991 qpair failed and we were unable to recover it. 
00:30:36.991 [2024-11-20 07:31:11.749280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.991 [2024-11-20 07:31:11.749332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.991 [2024-11-20 07:31:11.749347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.991 [2024-11-20 07:31:11.749354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.991 [2024-11-20 07:31:11.749361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:36.991 [2024-11-20 07:31:11.749377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.991 qpair failed and we were unable to recover it. 00:30:37.253 [2024-11-20 07:31:11.759301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.253 [2024-11-20 07:31:11.759354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.253 [2024-11-20 07:31:11.759369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.253 [2024-11-20 07:31:11.759379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.253 [2024-11-20 07:31:11.759385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e95490 00:30:37.253 [2024-11-20 07:31:11.759399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:37.253 qpair failed and we were unable to recover it. 00:30:37.253 [2024-11-20 07:31:11.769264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.253 [2024-11-20 07:31:11.769357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.253 [2024-11-20 07:31:11.769414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.253 [2024-11-20 07:31:11.769436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.253 [2024-11-20 07:31:11.769456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe54000b90 00:30:37.253 [2024-11-20 07:31:11.769506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.253 qpair failed and we were unable to recover it. 
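Every block in the run above fails identically: the target's ctrlr.c cannot find a controller with ID 0x1 to attach the new I/O qpair to (the controller is gone, which is the point of this disconnect exercise), so the host's fabric CONNECT poll completes with sct 1, sc 130; decimal 130 is 0x82, which reads as the fabrics "Connect Invalid Parameters" status and matches the target-side complaint. A sketch of how each side could be inspected by hand mid-loop, using the stock SPDK RPC client from the workspace's spdk/ checkout and plain nvme-cli (both invocations are illustrative, not part of the traced test):

    # Target side: list the live controllers on cnode1; if none carries
    # cntlid 0x1, every I/O-qpair CONNECT referencing it fails as logged.
    ./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1

    # Initiator side: retry the attach manually against the same listener.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1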
00:30:37.253 [2024-11-20 07:31:11.779353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.253 [2024-11-20 07:31:11.779451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.253 [2024-11-20 07:31:11.779507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.253 [2024-11-20 07:31:11.779528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.253 [2024-11-20 07:31:11.779547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe54000b90 00:30:37.253 [2024-11-20 07:31:11.779596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.253 qpair failed and we were unable to recover it. 00:30:37.253 [2024-11-20 07:31:11.779797] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:37.253 A controller has encountered a failure and is being reset. 00:30:37.253 [2024-11-20 07:31:11.779928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e92020 (9): Bad file descriptor 00:30:37.253 Controller properly reset. 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 
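The read/write completions streaming past here with sct=0, sc=8 are the flush of I/O still in flight when the reset began: status code type 0 is the NVMe generic set, and code 8 there is Command Aborted due to SQ Deletion, so each outstanding command is failed back as its submission queue is torn down. A one-line decode for reference (the lookup table is paraphrased from the NVMe base specification, not from the suite):

    declare -A nvme_sct0=( [7]='Command Abort Requested' [8]='Command Aborted due to SQ Deletion' )
    echo "sct=0 sc=8 -> ${nvme_sct0[8]}"   # what every entry in this burst decodes to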
Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Write completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 Read completed with error (sct=0, sc=8) 00:30:37.253 starting I/O failed 00:30:37.253 [2024-11-20 07:31:11.799827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:37.253 Initializing NVMe Controllers 00:30:37.253 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:37.253 Initialization complete. Launching workers. 00:30:37.253 Starting thread on core 1 00:30:37.253 Starting thread on core 2 00:30:37.253 Starting thread on core 3 00:30:37.253 Starting thread on core 0 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:37.253 00:30:37.253 real 0m11.435s 00:30:37.253 user 0m21.993s 00:30:37.253 sys 0m3.677s 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:37.253 ************************************ 00:30:37.253 END TEST nvmf_target_disconnect_tc2 00:30:37.253 ************************************ 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.253 rmmod nvme_tcp 00:30:37.253 rmmod nvme_fabrics 00:30:37.253 rmmod nvme_keyring 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1488718 ']' 00:30:37.253 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1488718 00:30:37.254 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1488718 ']' 00:30:37.254 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1488718 00:30:37.254 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:30:37.254 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:37.254 07:31:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1488718 00:30:37.254 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:30:37.254 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:30:37.254 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1488718' 00:30:37.254 killing process with pid 1488718 00:30:37.254 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 1488718 00:30:37.254 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1488718 00:30:37.514 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.514 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.514 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.514 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:37.514 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.515 07:31:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.060 00:30:40.060 real 0m22.863s 00:30:40.060 user 0m50.328s 00:30:40.060 sys 0m10.570s 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:40.060 
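What just ran is the suite's standard teardown (nvmftestfini): settle outstanding I/O, unload the initiator's kernel NVMe/TCP stack, kill the SPDK process that served cnode1 (pid 1488718 here, its comm reported as reactor_4), strip only SPDK's own iptables rules, and clear the test address off the second E810 port. Condensed into the equivalent manual steps (a sketch; $tgt_pid is illustrative, and the real logic lives in test/nvmf/common.sh):

    sync                                  # settle outstanding I/O first
    modprobe -v -r nvme-tcp               # rmmod nvme_tcp, then the now-idle nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics           # second pass in case fabrics was still pinned
    kill "$tgt_pid" && wait "$tgt_pid"    # $tgt_pid: the target app, 1488718 in this run (a child of the harness, so wait reaps it)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip -4 addr flush cvl_0_1              # remove the 10.0.0.x test address from the NIC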
************************************ 00:30:40.060 END TEST nvmf_target_disconnect 00:30:40.060 ************************************ 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:40.060 00:30:40.060 real 6m50.260s 00:30:40.060 user 11m33.412s 00:30:40.060 sys 2m24.630s 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:40.060 07:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.060 ************************************ 00:30:40.060 END TEST nvmf_host 00:30:40.060 ************************************ 00:30:40.060 07:31:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:40.060 07:31:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:40.060 07:31:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.060 07:31:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:40.060 07:31:14 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:40.060 07:31:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.060 ************************************ 00:30:40.060 START TEST nvmf_target_core_interrupt_mode 00:30:40.060 ************************************ 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:40.060 * Looking for test storage... 00:30:40.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.060 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:40.061 07:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.061 --rc genhtml_branch_coverage=1 00:30:40.061 --rc genhtml_function_coverage=1 00:30:40.061 --rc genhtml_legend=1 00:30:40.061 --rc geninfo_all_blocks=1 00:30:40.061 --rc geninfo_unexecuted_blocks=1 00:30:40.061 00:30:40.061 ' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.061 --rc genhtml_branch_coverage=1 00:30:40.061 --rc genhtml_function_coverage=1 00:30:40.061 --rc genhtml_legend=1 00:30:40.061 --rc geninfo_all_blocks=1 00:30:40.061 --rc geninfo_unexecuted_blocks=1 00:30:40.061 00:30:40.061 ' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.061 --rc genhtml_branch_coverage=1 00:30:40.061 --rc genhtml_function_coverage=1 00:30:40.061 --rc genhtml_legend=1 00:30:40.061 --rc geninfo_all_blocks=1 00:30:40.061 --rc geninfo_unexecuted_blocks=1 00:30:40.061 00:30:40.061 ' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.061 --rc genhtml_branch_coverage=1 00:30:40.061 --rc genhtml_function_coverage=1 00:30:40.061 --rc genhtml_legend=1 00:30:40.061 --rc geninfo_all_blocks=1 00:30:40.061 --rc geninfo_unexecuted_blocks=1 
00:30:40.061 00:30:40.061 ' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
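The 'lt 1.15 2' trace a little further up is scripts/common.sh picking lcov options: both version strings are split on ., - and :, compared field by numeric field, and since lcov 1.15 sorts below 2 the pre-2.x --rc lcov_* flags get exported. A trimmed restatement of that comparison, matching the traced control flow (the real helper also validates each field with its decimal guard, elided here):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]            # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov: export the --rc lcov_*_coverage=1 flags"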
00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:40.061 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.061 ************************************ 00:30:40.061 START TEST nvmf_abort 00:30:40.061 ************************************ 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:40.062 * Looking for test storage... 00:30:40.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.062 --rc genhtml_branch_coverage=1 00:30:40.062 --rc genhtml_function_coverage=1 00:30:40.062 --rc genhtml_legend=1 00:30:40.062 --rc geninfo_all_blocks=1 00:30:40.062 --rc geninfo_unexecuted_blocks=1 00:30:40.062 00:30:40.062 ' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.062 --rc genhtml_branch_coverage=1 00:30:40.062 --rc genhtml_function_coverage=1 00:30:40.062 --rc genhtml_legend=1 00:30:40.062 --rc geninfo_all_blocks=1 00:30:40.062 --rc geninfo_unexecuted_blocks=1 00:30:40.062 00:30:40.062 ' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.062 --rc genhtml_branch_coverage=1 00:30:40.062 --rc genhtml_function_coverage=1 00:30:40.062 --rc genhtml_legend=1 00:30:40.062 --rc geninfo_all_blocks=1 00:30:40.062 --rc geninfo_unexecuted_blocks=1 00:30:40.062 00:30:40.062 ' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.062 --rc genhtml_branch_coverage=1 00:30:40.062 --rc genhtml_function_coverage=1 00:30:40.062 --rc genhtml_legend=1 00:30:40.062 --rc geninfo_all_blocks=1 00:30:40.062 --rc geninfo_unexecuted_blocks=1 00:30:40.062 00:30:40.062 ' 00:30:40.062 07:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.062 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.062 07:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.063 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.323 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.323 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.323 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.323 07:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.465 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.466 07:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
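The records above are nvmf/common.sh's gather_supported_nvmf_pci_devs building lookup tables of supported NIC IDs (Intel E810/X722, various Mellanox parts) and matching the first E810 port (0x8086:0x159b, ice driver); each matched PCI function is then resolved to its kernel net interface through sysfs, as the records that follow show. A minimal standalone sketch of that discovery, assuming the standard sysfs layout the script itself reads:

# Sketch only: mirrors the discovery traced above, not the full helper.
# Keep Intel E810 functions (vendor 0x8086, device 0x159b or 0x1592) and list
# the net interfaces sysfs exposes under each matched PCI function.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$intel" && ( $device == 0x159b || $device == 0x1592 ) ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
    done
done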
00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.466 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.466 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.466 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:30:48.467 00:30:48.467 --- 10.0.0.2 ping statistics --- 00:30:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.467 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:48.467 00:30:48.467 --- 10.0.0.1 ping statistics --- 00:30:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.467 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1494791 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1494791 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1494791 ']' 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:48.467 07:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.467 [2024-11-20 07:31:23.019058] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:48.467 [2024-11-20 07:31:23.020035] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:30:48.467 [2024-11-20 07:31:23.020072] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.467 [2024-11-20 07:31:23.121697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.467 [2024-11-20 07:31:23.157811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.467 [2024-11-20 07:31:23.157842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.467 [2024-11-20 07:31:23.157853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.467 [2024-11-20 07:31:23.157860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.467 [2024-11-20 07:31:23.157871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.467 [2024-11-20 07:31:23.159161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.467 [2024-11-20 07:31:23.159318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.467 [2024-11-20 07:31:23.159318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.467 [2024-11-20 07:31:23.215205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:48.467 [2024-11-20 07:31:23.215237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:48.467 [2024-11-20 07:31:23.215775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
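The nvmf_tcp_init sequence traced above pairs the two E810 ports back-to-back: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an SPDK-tagged iptables rule opens port 4420, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace in interrupt mode. A condensed sketch of the same sequence, where SPDK_DIR stands in for the full workspace path and the socket wait loop stands in for waitforlisten:

# Condensed from the trace above; SPDK_DIR is a placeholder.
SPDK_DIR=/path/to/spdk
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                           # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> initiator port

# Start the target in interrupt mode, then wait for its RPC socket.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done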
00:30:48.467 [2024-11-20 07:31:23.216124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 [2024-11-20 07:31:23.856089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 Malloc0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 Delay0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 [2024-11-20 07:31:23.939935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.409 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:49.409 [2024-11-20 07:31:24.062536] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:51.953 Initializing NVMe Controllers 00:30:51.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:51.953 controller IO queue size 128 less than required 00:30:51.953 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:51.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:51.953 Initialization complete. Launching workers. 
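With the target listening on /var/tmp/spdk.sock, the rpc_cmd calls traced above configure it: a TCP transport, a 64 MiB Malloc bdev wrapped in a delay bdev (the arguments are latencies, in microseconds as we read them, large enough to keep I/Os in flight and abortable), a subsystem with that namespace, and data plus discovery listeners; the bundled abort example is then pointed at the listener. Reconstructed as plain commands, reusing the SPDK_DIR placeholder from the sketch above:

RPC="$SPDK_DIR/scripts/rpc.py"       # talks to /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# One core, one second, queue depth 128, against the listener just created.
"$SPDK_DIR/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128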
00:30:51.953 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28930 00:30:51.953 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28987, failed to submit 66 00:30:51.953 success 28930, unsuccessful 57, failed 0 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.953 rmmod nvme_tcp 00:30:51.953 rmmod nvme_fabrics 00:30:51.953 rmmod nvme_keyring 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1494791 ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1494791 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1494791 ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1494791 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1494791 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1494791' 00:30:51.953 killing process with pid 1494791 
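A quick cross-check of the abort counters above, on our reading of the example's accounting (an aborted I/O is reported as a failed I/O, and every submitted abort lands in exactly one of the success/unsuccessful/failed buckets):

    I/Os:    123 completed + 28930 failed           = 29053 issued
    aborts:  28987 submitted + 66 failed to submit  = 29053 attempted
    aborts:  28930 success + 57 unsuccessful + 0 failed = 28987 submitted

The 28930 successful aborts match the 28930 failed I/Os exactly, consistent with the test passing before teardown begins.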
00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1494791 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1494791 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.953 07:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:53.866 00:30:53.866 real 0m13.965s 00:30:53.866 user 0m11.205s 00:30:53.866 sys 0m7.297s 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:53.866 ************************************ 00:30:53.866 END TEST nvmf_abort 00:30:53.866 ************************************ 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:53.866 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.129 ************************************ 00:30:54.129 START TEST nvmf_ns_hotplug_stress 00:30:54.129 ************************************ 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:54.129 * Looking for test storage... 
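Teardown, as traced above, is the setup in reverse: unload the initiator-side kernel modules, kill the target by pid, strip only the SPDK-tagged firewall rules, and remove the namespace. A condensed sketch under the same assumptions as before (ip netns delete stands in for the _remove_spdk_ns helper named in the trace):

modprobe -v -r nvme-tcp              # trace shows nvme_tcp and nvme_keyring removed
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid from the launch sketch
# Drop only rules tagged SPDK_NVMF, leaving the rest of the firewall intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1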
00:30:54.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:54.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.129 --rc genhtml_branch_coverage=1 00:30:54.129 --rc genhtml_function_coverage=1 00:30:54.129 --rc genhtml_legend=1 00:30:54.129 --rc geninfo_all_blocks=1 00:30:54.129 --rc geninfo_unexecuted_blocks=1 00:30:54.129 00:30:54.129 ' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:54.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.129 --rc genhtml_branch_coverage=1 00:30:54.129 --rc genhtml_function_coverage=1 00:30:54.129 --rc genhtml_legend=1 00:30:54.129 --rc geninfo_all_blocks=1 00:30:54.129 --rc geninfo_unexecuted_blocks=1 00:30:54.129 00:30:54.129 ' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:54.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.129 --rc genhtml_branch_coverage=1 00:30:54.129 --rc genhtml_function_coverage=1 00:30:54.129 --rc genhtml_legend=1 00:30:54.129 --rc geninfo_all_blocks=1 00:30:54.129 --rc geninfo_unexecuted_blocks=1 00:30:54.129 00:30:54.129 ' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:54.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.129 --rc genhtml_branch_coverage=1 00:30:54.129 --rc genhtml_function_coverage=1 
00:30:54.129 --rc genhtml_legend=1 00:30:54.129 --rc geninfo_all_blocks=1 00:30:54.129 --rc geninfo_unexecuted_blocks=1 00:30:54.129 00:30:54.129 ' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
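The `lt 1.15 2` probe traced a few records back is scripts/common.sh checking whether the installed lcov (1.15) predates 2.x so it can pick the compatible coverage-flag spelling; cmp_versions splits each version string on '.', '-' and ':' and walks the components numerically. A trimmed sketch of that comparison, assuming the same splitting rule and numeric components:

# Sketch of the component-wise version compare; returns 0 when $1 < $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal
}
version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* flag spelling"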
00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.129 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.130 07:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.273 07:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.273 07:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:02.273 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:02.273 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.273 
07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:02.273 Found net devices under 0000:31:00.0: cvl_0_0 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:02.273 Found net devices under 0000:31:00.1: cvl_0_1 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.273 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.274 07:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.274 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:31:02.535 00:31:02.535 --- 10.0.0.2 ping statistics --- 00:31:02.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.535 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:31:02.535 00:31:02.535 --- 10.0.0.1 ping statistics --- 00:31:02.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.535 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1500156 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1500156 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1500156 ']' 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
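The nvmf_tcp_init trace above boils down to a small amount of plumbing: the two e810 ports (0000:31:00.0/1, ice driver) were resolved to their net devices cvl_0_0 and cvl_0_1, the target-side port was moved into a private network namespace, each side got an address on 10.0.0.0/24, TCP port 4420 was opened, and connectivity was verified with one ping in each direction. A condensed sketch of the equivalent commands, reconstructed from the xtrace lines (the firewall rule is shown without SPDK's 'ipts' comment-tagging wrapper visible at common.sh@790):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                     # root ns -> namespace (0.631 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> root ns (0.307 ms above)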
00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.535 07:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:02.535 [2024-11-20 07:31:37.193628] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.535 [2024-11-20 07:31:37.194614] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:31:02.535 [2024-11-20 07:31:37.194654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.535 [2024-11-20 07:31:37.295357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:02.796 [2024-11-20 07:31:37.331115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.796 [2024-11-20 07:31:37.331148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.796 [2024-11-20 07:31:37.331156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.796 [2024-11-20 07:31:37.331162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.796 [2024-11-20 07:31:37.331169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.796 [2024-11-20 07:31:37.332464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.796 [2024-11-20 07:31:37.332618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.796 [2024-11-20 07:31:37.332619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.796 [2024-11-20 07:31:37.388323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:02.797 [2024-11-20 07:31:37.388378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.797 [2024-11-20 07:31:37.388881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:02.797 [2024-11-20 07:31:37.389232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
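With networking in place, common.sh@508 launches the target inside the namespace, in interrupt mode, pinned by core mask 0xE (cores 1-3, matching the three reactor notices above), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. The per-thread "Set spdk_thread ... to intr mode" messages are the interrupt-mode startup signature. A minimal launch sketch; the backgrounding and PID capture are inferred from nvmfpid=1500156 in the trace, not quoted from the script:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # waitforlisten (autotest_common.sh) polls until RPCs are accepted on /var/tmp/spdk.sock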
00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:03.370 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:03.631 [2024-11-20 07:31:38.217434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.631 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:03.892 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.892 [2024-11-20 07:31:38.581908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.892 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:04.153 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:04.414 Malloc0 00:31:04.414 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:04.414 Delay0 00:31:04.414 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.675 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:04.935 NULL1 00:31:04.935 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
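ns_hotplug_stress.sh@27-36 then configures the target over rpc.py: a TCP transport with 8192-byte in-capsule data, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, at most 10 namespaces), data and discovery listeners on 10.0.0.2:4420, and the two bdevs the stress loop will juggle, Delay0 layered on a malloc bdev and the 1000-block null bdev NULL1. Condensed from the commands logged above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192 B in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                      # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0               # 32 MB malloc bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # average/tail read and write latency settings
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512                    # 1000 blocks of 512 B
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1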
00:31:04.935 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1500669 00:31:04.935 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:04.935 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:04.935 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.196 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.457 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:05.457 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:05.719 true 00:31:05.719 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:05.719 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.719 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.980 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:05.980 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:06.241 true 00:31:06.241 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:06.241 07:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.502 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.502 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:06.502 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:06.763 true 00:31:06.763 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:06.763 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.023 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.285 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:07.285 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:07.285 true 00:31:07.285 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:07.285 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.546 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.811 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:07.811 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:07.811 true 00:31:07.811 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:07.811 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.073 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.334 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:08.335 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:08.335 true 00:31:08.335 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:08.335 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.595 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.856 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:08.856 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:08.856 true 00:31:09.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:09.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.116 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.376 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:09.376 07:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:09.637 true 00:31:09.637 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:09.637 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.637 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.897 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:09.897 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:10.157 true 00:31:10.157 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:10.157 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.157 07:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.418 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:10.418 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:10.679 true 00:31:10.679 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1500669 00:31:10.679 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.939 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.939 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:10.940 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:11.200 true 00:31:11.200 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:11.200 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.461 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.461 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:11.461 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:11.722 true 00:31:11.722 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:11.722 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.983 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.983 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:11.983 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:12.244 true 00:31:12.244 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:12.244 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.504 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.504 07:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:12.504 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:12.765 true 00:31:12.765 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:12.765 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.025 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.285 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:13.285 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:13.285 true 00:31:13.285 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:13.285 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.547 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.808 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:13.808 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:13.808 true 00:31:13.808 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:13.808 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.069 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.329 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:14.329 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:14.329 true 00:31:14.590 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:14.590 07:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.590 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.850 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:14.850 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:15.110 true 00:31:15.110 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:15.110 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.110 07:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.371 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:15.371 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:15.632 true 00:31:15.632 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:15.632 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.632 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.892 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:15.892 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:16.152 true 00:31:16.152 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:16.152 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.412 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.412 07:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:16.412 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:16.673 true 00:31:16.673 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:16.673 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.935 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.935 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:16.935 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:17.195 true 00:31:17.195 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:17.195 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.456 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.717 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:17.717 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:17.717 true 00:31:17.717 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:17.717 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.977 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.238 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:18.238 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:18.238 true 00:31:18.238 07:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:18.238 07:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.549 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.835 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:18.836 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:18.836 true 00:31:18.836 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:18.836 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.125 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.386 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:19.386 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:19.386 true 00:31:19.386 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:19.386 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.646 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.907 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:19.907 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:19.907 true 00:31:19.907 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:19.907 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.167 07:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.428 07:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:20.428 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:20.428 true 00:31:20.688 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:20.688 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.688 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:20.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:21.209 true 00:31:21.209 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:21.209 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.209 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.470 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:21.470 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:21.730 true 00:31:21.730 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:21.730 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.991 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.991 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:21.991 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:22.252 true 00:31:22.252 07:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:22.252 07:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.513 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.513 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:22.513 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:22.774 true 00:31:22.774 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:22.774 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.035 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.035 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:23.035 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:23.296 true 00:31:23.296 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:23.296 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.556 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.817 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:23.817 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:23.817 true 00:31:23.817 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:23.817 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.077 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.337 07:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:24.337 07:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:24.337 true 00:31:24.337 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:24.337 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.599 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.859 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:24.859 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:24.859 true 00:31:25.120 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:25.120 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.120 07:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.380 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:25.380 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:25.641 true 00:31:25.641 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:25.641 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.641 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.901 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:25.901 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:26.162 true 00:31:26.162 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:26.162 07:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.423 07:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.423 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:26.423 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:26.683 true 00:31:26.683 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:26.683 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.943 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.943 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:26.943 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:27.203 true 00:31:27.203 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:27.203 07:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.463 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.724 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:27.724 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:27.724 true 00:31:27.724 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:27.724 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.984 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.245 07:32:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:28.245 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:28.245 true 00:31:28.245 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:28.245 07:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.506 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.767 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:28.767 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:28.767 true 00:31:28.767 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:28.767 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.029 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.289 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:29.289 07:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:29.289 true 00:31:29.550 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:29.550 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.550 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.811 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:29.811 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:30.072 true 00:31:30.072 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:30.072 07:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.072 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.334 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:30.334 07:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:30.595 true 00:31:30.595 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:30.595 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.856 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.856 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:30.856 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:31.117 true 00:31:31.117 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:31.117 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.378 07:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.378 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:31.378 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:31.639 true 00:31:31.639 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:31.639 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.900 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.161 07:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:32.161 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:32.161 true 00:31:32.161 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:32.161 07:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.422 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.683 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:32.683 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:32.683 true 00:31:32.684 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:32.684 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.945 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.206 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:33.206 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:33.206 true 00:31:33.206 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:33.206 07:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.467 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.728 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:33.728 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:33.728 true 00:31:33.989 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669 00:31:33.990 07:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.990 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:34.250 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:34.250 07:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:34.512 true
00:31:34.512 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669
00:31:34.512 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:34.512 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:34.773 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:31:34.773 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:35.034 true
00:31:35.034 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669
00:31:35.034 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:35.295 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:35.295 Initializing NVMe Controllers
00:31:35.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:35.295 Controller IO queue size 128, less than required.
00:31:35.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:35.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:35.295 Initialization complete. Launching workers.
00:31:35.295 ========================================================
00:31:35.295                                                                           Latency(us)
00:31:35.295 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:31:35.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   29906.90      14.60    4279.83    1480.86   10885.72
00:31:35.295 ========================================================
00:31:35.295 Total                                                                  :   29906.90      14.60    4279.83    1480.86   10885.72
00:31:35.295
00:31:35.295 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:35.295 07:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:35.556 true
00:31:35.556 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1500669
00:31:35.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1500669) - No such process
00:31:35.556 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1500669
00:31:35.556 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:35.817 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:36.078 null0
00:31:36.078 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.078 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.078 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:36.339 null1
00:31:36.339 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.339 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.339 07:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:36.339 null2
00:31:36.339 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.339 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.339 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:31:36.600 null3
00:31:36.600 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.600 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.600 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:31:36.600 null4
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:31:36.861 null5
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:36.861 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:31:37.124 null6
00:31:37.124 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:37.124 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:37.124 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:31:37.124 null7
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
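The churn logged above is the single-namespace phase of the test: the @44..@53 markers trace a loop that hot-swaps namespace 1 and resizes the NULL1 bdev on every pass until the I/O generator (PID 1500669 in this run) exits. A minimal bash sketch of that loop, reconstructed only from the logged markers; rpc_py, nqn, and perf_pid are assumed names, and the real ns_hotplug_stress.sh may differ in detail:

    while kill -0 $perf_pid; do                      # @44: probe whether perf is still alive
        $rpc_py nvmf_subsystem_remove_ns $nqn 1      # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns $nqn Delay0    # @46: re-add it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                 # @49: next size (1039..1055 in this excerpt)
        $rpc_py bdev_null_resize NULL1 $null_size    # @50: resize the null bdev under live I/O
    done
    wait $perf_pid                                   # @53: reap the finished perf process

The "(1500669) - No such process" record above is this loop's exit condition firing: kill -0 returns nonzero once the perf PID is gone, and the script falls through to the @53 wait.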
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
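The bdev_null_create records above are the setup for the concurrent phase: eight 100 MB null bdevs with 4096-byte blocks, then one background add_remove worker per bdev. Sketched from the @58..@66 markers, again with assumed variable names:

    nthreads=8                                      # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do            # @59
        $rpc_py bdev_null_create null$i 100 4096    # @60: <name> <size_mb> <block_size>
    done
    for ((i = 0; i < nthreads; i++)); do            # @62
        add_remove $((i + 1)) null$i &              # @63: NSID i+1 backed by null$i
        pids+=($!)                                  # @64: collect the worker PIDs
    done
    wait "${pids[@]}"                               # @66

The eight PIDs named in the @66 wait logged just below (1507020 1507023 1507026 1507028 1507031 1507034 1507037 1507040) are exactly that pids array, one entry per worker.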
00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:37.385 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
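From this point to the end of the section the log is the xtrace of those eight workers interleaved on one console, which is why @16/@17/@18 records for different namespace IDs arrive shuffled together. Each worker runs the add_remove function, which in sketch form (argument order taken from the logged rpc.py calls; a reconstruction, not the script verbatim) is:

    # add_remove <nsid> <bdev>: hot-plug one namespace ten times.
    add_remove() {
        local nsid=$1 bdev=$2                                   # @14
        for ((i = 0; i < 10; i++)); do                          # @16
            $rpc_py nvmf_subsystem_add_ns -n $nsid $nqn $bdev   # @17: e.g. -n 1 ... null0
            $rpc_py nvmf_subsystem_remove_ns $nqn $nsid         # @18
        done
    }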
00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:37.386 07:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1507020 1507023 1507026 1507028 1507031 1507034 1507037 1507040 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:37.386 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.647 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:37.647 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:37.647 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.647 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.647 07:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.648 07:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.648 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:37.909 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.169 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.169 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.169 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.169 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:38.170 07:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.170 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:38.431 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:38.431 
07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:38.431 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
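Since all eight workers target the same subsystem, the exact interleaving of @17 adds and @18 removes is nondeterministic from run to run; only the per-NSID ordering is fixed. When a run like this needs debugging, the live namespace map can be checked out-of-band with the standard nvmf_get_subsystems RPC; a hypothetical spot-check (the jq filter is illustrative):

    # List the NSIDs currently attached to cnode1 while the workers churn.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid]'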
00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.692 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.693 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:38.954 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.216 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:39.217 07:32:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:39.217 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:39.479 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.479 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.479 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:39.479 07:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:39.479 07:32:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.479 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:39.741 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.003 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:40.265 07:32:14 
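
Each @17/@18 line in this stretch is a standalone JSON-RPC call against the running target. Outside the loop, one attach/detach pair looks like this (paths, NQN and bdev names exactly as traced above):

  # attach bdev null0 to the subsystem as namespace 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
  # detach namespace 1 again
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
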
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:40.265 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:31:40.266 07:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.266 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:40.527 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.789 07:32:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.789 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.051 07:32:15 
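
The @16 counter lines around this point all come from one bounded loop in target/ns_hotplug_stress.sh. A minimal re-creation consistent with the trace follows; only the two rpc.py invocations are verbatim, the random add/remove pairing is an assumption about the script body:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do                     # @16: bounded pass counter
      nsid=$(( RANDOM % 8 + 1 ))             # assumed: pick a namespace id 1..8
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))" || true   # @17
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( RANDOM % 8 + 1 ))" || true         # @18
      (( ++i ))                              # @16
  done

The "|| true" guards reflect that adding an already-present namespace or removing an absent one is expected churn in a hotplug stress run, not a test failure.
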
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:41.051 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:41.051 rmmod nvme_tcp
00:31:41.312 rmmod nvme_fabrics
00:31:41.312 rmmod nvme_keyring
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1500156 ']'
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1500156
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1500156 ']'
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1500156
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1500156
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
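
The @952-@958 checks above are the start of autotest_common.sh's killprocess(); the kill/wait steps continue just below. The whole pattern, reconstructed from the trace rather than copied from the script:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # @952: refuse an empty pid
      kill -0 "$pid" || return 1                 # @956: is it still alive?
      local pname
      pname=$(ps --no-headers -o comm= "$pid")   # @958: resolves to reactor_1 here
      [ "$pname" = sudo ] && return 1            # @962: assumed guard; real sudo handling may differ
      echo "killing process with pid $pid"       # @970
      kill "$pid"                                # @971
      wait "$pid"                                # @976: reap it (the target is a child of this shell)
  }

The iptr helper that runs a few lines below strips only the firewall rules the suite tagged with an SPDK_NVMF comment, leaving everything else intact:

  iptables-save | grep -v SPDK_NVMF | iptables-restore
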
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1500156'
00:31:41.312 killing process with pid 1500156
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1500156
00:31:41.312 07:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1500156
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:41.312 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:31:41.574 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:41.574 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:41.574 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:41.574 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:41.574 07:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:43.486
00:31:43.486 real 0m49.510s
00:31:43.486 user 3m3.834s
00:31:43.486 sys 0m22.291s
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:43.486 ************************************
00:31:43.486 END TEST nvmf_ns_hotplug_stress
00:31:43.486 ************************************
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:43.486 ************************************
00:31:43.486 START TEST nvmf_delete_subsystem
************************************ 00:31:43.486 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:43.762 * Looking for test storage... 00:31:43.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:43.762 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:43.762 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:31:43.762 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:43.762 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:43.762 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:43.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.763 --rc genhtml_branch_coverage=1 00:31:43.763 --rc genhtml_function_coverage=1 00:31:43.763 --rc genhtml_legend=1 00:31:43.763 --rc geninfo_all_blocks=1 00:31:43.763 --rc geninfo_unexecuted_blocks=1 00:31:43.763 00:31:43.763 ' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:43.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.763 --rc genhtml_branch_coverage=1 00:31:43.763 --rc genhtml_function_coverage=1 00:31:43.763 --rc genhtml_legend=1 00:31:43.763 --rc geninfo_all_blocks=1 00:31:43.763 --rc geninfo_unexecuted_blocks=1 00:31:43.763 00:31:43.763 ' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:43.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.763 --rc genhtml_branch_coverage=1 00:31:43.763 --rc genhtml_function_coverage=1 00:31:43.763 --rc genhtml_legend=1 00:31:43.763 --rc geninfo_all_blocks=1 00:31:43.763 --rc geninfo_unexecuted_blocks=1 00:31:43.763 00:31:43.763 ' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:43.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.763 --rc genhtml_branch_coverage=1 00:31:43.763 --rc genhtml_function_coverage=1 00:31:43.763 --rc 
genhtml_legend=1 00:31:43.763 --rc geninfo_all_blocks=1 00:31:43.763 --rc geninfo_unexecuted_blocks=1 00:31:43.763 00:31:43.763 ' 00:31:43.763 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.764 07:32:18 
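
The scripts/common.sh@333-@368 walk traced earlier in this test's setup is the suite comparing the installed lcov version (1.15) against 2, field by field. A compact re-creation of that pattern (the function name lt matches the trace; the body here is reconstructed, not copied):

  lt() {  # succeeds when version $1 sorts before version $2
      local IFS=.-: v=0 ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2 in the first field

That first-field comparison is why the branch-coverage LCOV_OPTS seen above get exported for this lcov release.
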
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.764 07:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:51.929 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.929 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.930 07:32:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.930 07:32:26 
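
The e810/x722/mlx arrays built above are a whitelist of supported NIC device IDs: 0x1592/0x159b are Intel E810 parts, 0x37d2 is Intel X722, and the 0x10xx/0xa2xx entries are Mellanox. pci_bus_cache is a helper internal to the suite; a rough standalone equivalent of the E810 match using plain lspci (assumed equivalent, not the suite's code):

  # enumerate E810 functions by vendor:device pair, domain-qualified
  e810=()
  for dev_id in 1592 159b; do
      while read -r addr _; do
          e810+=("$addr")
      done < <(lspci -D -n -d "8086:$dev_id")
  done
  (( ${#e810[@]} )) && printf 'E810 candidate: %s\n' "${e810[@]}"

On this host that yields the two functions reported next, 0000:31:00.0 and 0000:31:00.1.
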
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:51.930 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:51.930 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.930 07:32:26 
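
The "$pci/net/"* glob at @411 above is how each matched PCI function is resolved to its kernel net device. Standalone, for one of the ports found here:

  pci=0000:31:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] && echo "$pci -> ${path##*/}"   # prints "0000:31:00.0 -> cvl_0_0" on this host
  done
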
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:51.930 Found net devices under 0000:31:00.0: cvl_0_0 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:51.930 Found net devices under 0000:31:00.1: cvl_0_1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.930 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:31:51.931 00:31:51.931 --- 10.0.0.2 ping statistics --- 00:31:51.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.931 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:31:51.931 00:31:51.931 --- 10.0.0.1 ping statistics --- 00:31:51.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.931 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1512541 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1512541 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1512541 ']' 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
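
In short, nvmf_tcp_init above splits the two ice-driven ports (cvl_0_0 and cvl_0_1, found under 0000:31:00.0/.1) between network namespaces so one host can act as both target and initiator. Condensed from the commands traced above, a sketch of the sequence:

    # Target port goes into its own namespace; initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip exactly this rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The sub-millisecond ping times above confirm the two ports reach each other before any NVMe traffic is attempted.
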
00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:51.931 07:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:51.931 [2024-11-20 07:32:26.541020] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:51.931 [2024-11-20 07:32:26.542049] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:31:51.931 [2024-11-20 07:32:26.542089] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.931 [2024-11-20 07:32:26.627733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:51.931 [2024-11-20 07:32:26.663619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.931 [2024-11-20 07:32:26.663651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.931 [2024-11-20 07:32:26.663659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.931 [2024-11-20 07:32:26.663666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.931 [2024-11-20 07:32:26.663671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.931 [2024-11-20 07:32:26.664833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.931 [2024-11-20 07:32:26.664835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.190 [2024-11-20 07:32:26.721183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:52.190 [2024-11-20 07:32:26.721762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:52.190 [2024-11-20 07:32:26.722103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
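
The target (nvmfpid=1512541) is launched inside that namespace on two cores (-m 0x3) with interrupt mode enabled, and the harness's waitforlisten blocks until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock socket (the real helper in autotest_common.sh also bounds its retries):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Poll the RPC socket until the application is up; rpc_get_methods is a cheap query.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

Note the notices above: with --interrupt-mode, both reactors and the nvmf poll-group threads come up in interrupt rather than polling mode, which is exactly the variant this test run exercises.
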
00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 [2024-11-20 07:32:27.361543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 [2024-11-20 07:32:27.385754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.759 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 NULL1 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.760 07:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.760 Delay0 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1512643 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:52.760 07:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:52.760 [2024-11-20 07:32:27.490361] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
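
Stripped of the xtrace noise, the provisioning traced above is a short rpc.py sequence (rpc_cmd is the harness's wrapper around scripts/rpc.py; paths are abbreviated here): a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev whose read/write latencies (values in microseconds) are about one second, guaranteeing I/O is still in flight when the subsystem is deleted.

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s avg/p99 read and write latency
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Load generator: 5 s of 70/30 random read/write at queue depth 128 with 512 B I/Os
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The deprecation warning that follows is expected: perf connects to the discovery subsystem over a listener that was only added to cnode1.
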
00:31:55.297 07:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:55.297 07:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:55.297 07:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... long runs of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submission failures elided; they ran interleaved with the qpair state errors below ...]
00:31:55.297 [2024-11-20 07:32:29.580160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae4a0 is same with the state(6) to be set
00:31:55.298 [2024-11-20 07:32:29.580615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfadf00 is same with the state(6) to be set
00:31:55.298 [2024-11-20 07:32:29.581326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f685000d4b0 is same with the state(6) to be set
00:31:55.868 [2024-11-20 07:32:30.547273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaf5e0 is same with the state(6) to be set
00:31:55.869 [2024-11-20 07:32:30.581482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f685000d020 is same with the state(6) to be set
00:31:55.869 [2024-11-20 07:32:30.581605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6850000c40 is same with the state(6) to be set
00:31:55.869 [2024-11-20 07:32:30.581714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f685000d7e0 is same with the state(6) to be set
00:31:55.869 [2024-11-20 07:32:30.583692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae2c0 is same with the state(6) to be set
00:31:55.869 Initializing NVMe Controllers
00:31:55.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:55.869 Controller IO queue size 128, less than required.
00:31:55.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:55.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:55.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:55.869 Initialization complete. Launching workers.
00:31:55.869 ========================================================
00:31:55.869 Latency(us)
00:31:55.869 Device Information : IOPS MiB/s Average min max
00:31:55.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.21 0.08 881673.63 234.03 1008355.29
00:31:55.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.66 0.08 1064620.71 506.51 1999327.41
00:31:55.869 ========================================================
00:31:55.869 Total : 316.87 0.15 975588.38 234.03 1999327.41
00:31:55.869
00:31:55.869 [2024-11-20 07:32:30.584227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfaf5e0 (9): Bad file descriptor
00:31:55.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:55.869 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:55.869 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:55.869 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1512643
00:31:55.869 07:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1512643
00:31:56.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1512643) - No such process
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1512643
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1512643
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1512643
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 [2024-11-20 07:32:31.117795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1513388 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:56.440 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:56.440 [2024-11-20 07:32:31.188883] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
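
Two details above are worth decoding. First, sct=0/sc=8 on the failed completions is the NVMe generic status "Command Aborted due to SQ Deletion": deleting the subsystem tears down its queue pairs, so the I/Os perf still has queued against the ~1 s Delay0 bdev are aborted and perf exits with an error, which is the intended outcome here. Second, the harness polls for that exit with a bounded kill -0 loop and then asserts the failure via the NOT helper; a sketch of the pattern from the traced script:

    delay=0
    while kill -0 "$perf_pid"; do        # kill -0 probes the PID without sending a signal
        (( delay++ > 30 )) && exit 1     # bound from the trace; the harness's failure path differs
        sleep 0.5
    done
    NOT wait "$perf_pid"                 # autotest helper: succeeds only if wait reports failure

The second pass started above (perf_pid=1513388, -t 3) re-creates the subsystem and polls with a tighter bound of 20 iterations; the captured trace shows no second delete before perf finishes on its own, and with every I/O routed through Delay0 the summary below reports averages just above 1,000,000 us.
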
00:31:57.009 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:57.009 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:57.009 07:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:57.579 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:57.579 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:57.579 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:58.148 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:58.148 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:58.148 07:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:58.408 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:58.408 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:58.408 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:58.978 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:58.978 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:58.978 07:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:59.548 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:59.548 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388 00:31:59.548 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:59.808 Initializing NVMe Controllers 00:31:59.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.808 Controller IO queue size 128, less than required. 00:31:59.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:59.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:59.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:59.808 Initialization complete. Launching workers. 
00:31:59.808 ========================================================
00:31:59.808 Latency(us)
00:31:59.808 Device Information : IOPS MiB/s Average min max
00:31:59.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002239.93 1000331.15 1005488.56
00:31:59.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004391.70 1000170.78 1042222.42
00:31:59.808 ========================================================
00:31:59.808 Total : 256.00 0.12 1003315.82 1000170.78 1042222.42
00:31:59.808
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1513388
00:32:00.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1513388) - No such process
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1513388
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:00.068 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:00.069 rmmod nvme_tcp
00:32:00.069 rmmod nvme_fabrics
00:32:00.069 rmmod nvme_keyring
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1512541 ']'
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1512541
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1512541 ']'
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1512541
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1512541 00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1512541' 00:32:00.069 killing process with pid 1512541 00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1512541 00:32:00.069 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1512541 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.329 07:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.870 00:32:02.870 real 0m18.783s 00:32:02.870 user 0m26.692s 00:32:02.870 sys 0m7.768s 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:02.870 ************************************ 00:32:02.870 END TEST nvmf_delete_subsystem 00:32:02.870 ************************************ 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:02.870 ************************************ 00:32:02.870 START TEST nvmf_host_management 00:32:02.870 ************************************ 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:02.870 * Looking for test storage... 00:32:02.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.870 --rc genhtml_branch_coverage=1 00:32:02.870 --rc genhtml_function_coverage=1 00:32:02.870 --rc genhtml_legend=1 00:32:02.870 --rc geninfo_all_blocks=1 00:32:02.870 --rc geninfo_unexecuted_blocks=1 00:32:02.870 00:32:02.870 ' 00:32:02.870 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.870 --rc genhtml_branch_coverage=1 00:32:02.870 --rc genhtml_function_coverage=1 00:32:02.870 --rc genhtml_legend=1 00:32:02.870 --rc geninfo_all_blocks=1 00:32:02.870 --rc geninfo_unexecuted_blocks=1 00:32:02.870 00:32:02.870 ' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:02.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.871 --rc genhtml_branch_coverage=1 00:32:02.871 --rc genhtml_function_coverage=1 00:32:02.871 --rc genhtml_legend=1 00:32:02.871 --rc geninfo_all_blocks=1 00:32:02.871 --rc geninfo_unexecuted_blocks=1 00:32:02.871 00:32:02.871 ' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:02.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.871 --rc genhtml_branch_coverage=1 00:32:02.871 --rc genhtml_function_coverage=1 00:32:02.871 --rc genhtml_legend=1 
00:32:02.871 --rc geninfo_all_blocks=1 00:32:02.871 --rc geninfo_unexecuted_blocks=1 00:32:02.871 00:32:02.871 ' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.871 07:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.871 07:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.097 07:32:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.097 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:11.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:11.098 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
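For orientation, the device scan traced above reduces to a small sysfs walk: match PCI functions whose vendor/device pair is Intel E810 (0x8086/0x159b, as the two hits for 0000:31:00.0 and 0000:31:00.1 show) and collect the kernel net devices bound to each. A minimal standalone sketch of that walk follows; the echo format is illustrative, not part of nvmf/common.sh.

    #!/usr/bin/env bash
    # Sketch of the pci_devs/pci_net_devs discovery traced above: find Intel
    # E810 functions and the net interfaces the kernel bound to them.
    vendor=0x8086 device=0x159b        # E810 IDs matched in the log
    for pci in /sys/bus/pci/devices/*; do
        [[ "$(< "$pci/vendor")" == "$vendor" ]] || continue
        [[ "$(< "$pci/device")" == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e "$net" ]] || continue
            echo "Found ${pci##*/}: ${net##*/}"   # e.g. 0000:31:00.0: cvl_0_0
        done
    done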
00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:11.098 Found net devices under 0000:31:00.0: cvl_0_0 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:11.098 Found net devices under 0000:31:00.1: cvl_0_1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:11.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:32:11.098 00:32:11.098 --- 10.0.0.2 ping statistics --- 00:32:11.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.098 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:32:11.098 00:32:11.098 --- 10.0.0.1 ping statistics --- 00:32:11.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.098 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:11.098 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1518726 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1518726 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1518726 ']' 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:11.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:11.099 07:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.099 [2024-11-20 07:32:45.595105] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:11.099 [2024-11-20 07:32:45.596370] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:11.099 [2024-11-20 07:32:45.596416] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.099 [2024-11-20 07:32:45.701326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:11.099 [2024-11-20 07:32:45.739658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.099 [2024-11-20 07:32:45.739697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.099 [2024-11-20 07:32:45.739705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.099 [2024-11-20 07:32:45.739712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.099 [2024-11-20 07:32:45.739718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.099 [2024-11-20 07:32:45.741290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.099 [2024-11-20 07:32:45.741446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.099 [2024-11-20 07:32:45.741714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.099 [2024-11-20 07:32:45.741714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:11.099 [2024-11-20 07:32:45.800950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:11.099 [2024-11-20 07:32:45.801542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:11.099 [2024-11-20 07:32:45.801585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:11.099 [2024-11-20 07:32:45.802280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:11.099 [2024-11-20 07:32:45.802465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
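Condensed for reference, the nvmftestinit/nvmfappstart sequence just traced builds a two-port loopback topology and starts the target inside a private namespace: cvl_0_0 (10.0.0.2) becomes the target port in netns cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP on port 4420. A sketch of the same commands, assuming root privileges and an SPDK build tree in the working directory:

    # Sketch of the namespace topology and target launch traced above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside ns
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
    # Interrupt-mode target on cores 1-4 (mask 0x1E), as nvmfappstart does above.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &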
00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.671 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.671 [2024-11-20 07:32:46.414472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.932 Malloc0 00:32:11.932 [2024-11-20 07:32:46.494729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1518976 00:32:11.932 07:32:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1518976 /var/tmp/bdevperf.sock 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1518976 ']' 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:11.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:11.932 { 00:32:11.932 "params": { 00:32:11.932 "name": "Nvme$subsystem", 00:32:11.932 "trtype": "$TEST_TRANSPORT", 00:32:11.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:11.932 "adrfam": "ipv4", 00:32:11.932 "trsvcid": "$NVMF_PORT", 00:32:11.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:11.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:11.932 "hdgst": ${hdgst:-false}, 00:32:11.932 "ddgst": ${ddgst:-false} 00:32:11.932 }, 00:32:11.932 "method": "bdev_nvme_attach_controller" 00:32:11.932 } 00:32:11.932 EOF 00:32:11.932 )") 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
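The gen_nvmf_target_json expansion above templates one bdev_nvme_attach_controller entry; the resolved values are printed in the trace that follows. As a standalone equivalent of the perf step, the sketch below writes that entry into the standard SPDK JSON-config envelope (the subsystems/bdev wrapper is an assumption about the generator's final output, which xtrace does not echo) and feeds it to bdevperf with the flags from the traced command line; the file path is illustrative, replacing the /dev/fd/63 substitution.

    # Sketch: run the same 10 s verify workload against the target.
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
        -q 64 -o 65536 -w verify -t 10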
00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:11.932 07:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:11.932 "params": { 00:32:11.932 "name": "Nvme0", 00:32:11.932 "trtype": "tcp", 00:32:11.932 "traddr": "10.0.0.2", 00:32:11.932 "adrfam": "ipv4", 00:32:11.932 "trsvcid": "4420", 00:32:11.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:11.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:11.932 "hdgst": false, 00:32:11.932 "ddgst": false 00:32:11.932 }, 00:32:11.932 "method": "bdev_nvme_attach_controller" 00:32:11.932 }' 00:32:11.932 [2024-11-20 07:32:46.596482] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:11.932 [2024-11-20 07:32:46.596532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518976 ] 00:32:11.932 [2024-11-20 07:32:46.674221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.193 [2024-11-20 07:32:46.710823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.453 Running I/O for 10 seconds... 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.716 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:12.716 [2024-11-20 07:32:47.470120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030800 is same with the state(6) to be set 00:32:12.716 [2024-11-20 07:32:47.470165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030800 is same with the state(6) to be set 00:32:12.716 [2024-11-20 07:32:47.470396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.716 [2024-11-20 07:32:47.470434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.716 [2024-11-20 07:32:47.470452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.716 [2024-11-20 07:32:47.470466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.716 [2024-11-20 07:32:47.470476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.716 [2024-11-20 07:32:47.470484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.716 [2024-11-20 07:32:47.470494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.716 [2024-11-20 07:32:47.470501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.716 [2024-11-20 07:32:47.470510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.716 [2024-11-20 07:32:47.470518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.716 [2024-11-20 07:32:47.470527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470679] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.717 [2024-11-20 07:32:47.470844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.717 [2024-11-20 07:32:47.470851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~38 near-identical command/completion pairs elided here: WRITE cid:9-10 (lba 99456-99584) and READ cid:15-50 (lba 92032-96512), all len:128 on qid:1, every one completed ABORTED - SQ DELETION (00/08) as the submission queue was torn down]
00:32:12.718 [2024-11-20 07:32:47.471507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:12.718 [2024-11-20 07:32:47.471514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.718 [2024-11-20 07:32:47.471524] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a6160 is same with the state(6) to be set 00:32:12.718 [2024-11-20 07:32:47.472782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:12.718 task offset: 96768 on job bdev=Nvme0n1 fails 00:32:12.718 00:32:12.718 Latency(us) 00:32:12.718 [2024-11-20T06:32:47.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.718 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:12.718 Job: Nvme0n1 ended in about 0.46 seconds with error 00:32:12.718 Verification LBA range: start 0x0 length 0x400 00:32:12.718 Nvme0n1 : 0.46 1570.05 98.13 140.54 0.00 36350.19 1788.59 31675.73 00:32:12.718 [2024-11-20T06:32:47.485Z] =================================================================================================================== 00:32:12.718 [2024-11-20T06:32:47.485Z] Total : 1570.05 98.13 140.54 0.00 36350.19 1788.59 31675.73 00:32:12.718 [2024-11-20 07:32:47.474789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:12.718 [2024-11-20 07:32:47.474811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x795b00 (9): Bad file descriptor 00:32:12.718 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.718 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:12.718 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.718 [2024-11-20 07:32:47.476047] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:12.718 [2024-11-20 07:32:47.476122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:12.718 [2024-11-20 07:32:47.476144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.718 [2024-11-20 07:32:47.476160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:12.718 [2024-11-20 07:32:47.476168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:12.718 [2024-11-20 07:32:47.476176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:12.718 [2024-11-20 07:32:47.476183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x795b00 00:32:12.718 [2024-11-20 07:32:47.476203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x795b00 (9): Bad file descriptor 00:32:12.718 [2024-11-20 07:32:47.476215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:12.718 [2024-11-20 07:32:47.476225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:12.718 [2024-11-20 07:32:47.476234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
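The abort storm above is the test's intended fault injection, not a transport failure: host_management.sh revokes nqn.2016-06.io.spdk:host0's access to cnode0 while bdevperf has 64 commands in flight, the target deletes submission queue 1, and every outstanding READ/WRITE completes ABORTED - SQ DELETION (00/08). The reconnect is then refused with "does not allow host" (sct 1, sc 132) until the rpc_cmd nvmf_subsystem_add_host call visible above restores access. A minimal sketch of that round-trip, assuming the standard scripts/rpc.py CLI; the remove step happens before this excerpt and is inferred from the abort sequence:

# Hypothetical standalone form; only the add_host call appears in this excerpt.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # in-flight I/O aborts with SQ DELETION (00/08)
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # CONNECT is accepted again on the next reset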
00:32:12.718 [2024-11-20 07:32:47.476243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:12.718 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:12.979 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.979 07:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1518976 00:32:13.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1518976) - No such process 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.919 { 00:32:13.919 "params": { 00:32:13.919 "name": "Nvme$subsystem", 00:32:13.919 "trtype": "$TEST_TRANSPORT", 00:32:13.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.919 "adrfam": "ipv4", 00:32:13.919 "trsvcid": "$NVMF_PORT", 00:32:13.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.919 "hdgst": ${hdgst:-false}, 00:32:13.919 "ddgst": ${ddgst:-false} 00:32:13.919 }, 00:32:13.919 "method": "bdev_nvme_attach_controller" 00:32:13.919 } 00:32:13.919 EOF 00:32:13.919 )") 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
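gen_nvmf_target_json expands the heredoc template above once per requested subsystem (here just 0), substituting $subsystem into the controller name and NQNs and taking traddr/trsvcid from the test environment; jq then validates the assembled document, which bdevperf reads over fd 62 via --json /dev/fd/62. The fully resolved document is printed just below. A standalone sketch of the same invocation, assuming the standard SPDK JSON-config wrapper ("subsystems" -> "bdev" -> "config") around the attach-controller entry; flags are copied from the run above:

# Illustrative only: the exact wrapper is produced by gen_nvmf_target_json.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
build/examples/bdevperf --json "$cfg" -q 64 -o 65536 -w verify -t 1   # 64-deep verify workload, 64 KiB I/O, 1 s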
00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:13.919 07:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.919 "params": { 00:32:13.919 "name": "Nvme0", 00:32:13.919 "trtype": "tcp", 00:32:13.919 "traddr": "10.0.0.2", 00:32:13.919 "adrfam": "ipv4", 00:32:13.919 "trsvcid": "4420", 00:32:13.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.919 "hdgst": false, 00:32:13.919 "ddgst": false 00:32:13.919 }, 00:32:13.919 "method": "bdev_nvme_attach_controller" 00:32:13.919 }' 00:32:13.919 [2024-11-20 07:32:48.547982] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:13.919 [2024-11-20 07:32:48.548037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519330 ] 00:32:13.919 [2024-11-20 07:32:48.625544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.919 [2024-11-20 07:32:48.660770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.179 Running I/O for 1 seconds... 00:32:15.119 1869.00 IOPS, 116.81 MiB/s 00:32:15.119 Latency(us) 00:32:15.119 [2024-11-20T06:32:49.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.119 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:15.119 Verification LBA range: start 0x0 length 0x400 00:32:15.119 Nvme0n1 : 1.01 1914.19 119.64 0.00 0.00 32717.98 1617.92 36481.71 00:32:15.119 [2024-11-20T06:32:49.886Z] =================================================================================================================== 00:32:15.119 [2024-11-20T06:32:49.886Z] Total : 1914.19 119.64 0.00 0.00 32717.98 1617.92 36481.71 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.379 07:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.379 rmmod nvme_tcp 00:32:15.379 rmmod nvme_fabrics 00:32:15.379 rmmod nvme_keyring 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1518726 ']' 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1518726 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1518726 ']' 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1518726 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1518726 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:15.379 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:15.380 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1518726' 00:32:15.380 killing process with pid 1518726 00:32:15.380 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1518726 00:32:15.380 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1518726 00:32:15.640 [2024-11-20 07:32:50.249095] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.640 07:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:18.187 00:32:18.187 real 0m15.269s 00:32:18.187 user 0m19.135s 00:32:18.187 sys 0m7.913s 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:18.187 ************************************ 00:32:18.187 END TEST nvmf_host_management 00:32:18.187 ************************************ 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:18.187 ************************************ 00:32:18.187 START TEST nvmf_lvol 00:32:18.187 ************************************ 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:18.187 * Looking for test storage... 
00:32:18.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:18.187 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.188 --rc genhtml_branch_coverage=1 00:32:18.188 --rc genhtml_function_coverage=1 00:32:18.188 --rc genhtml_legend=1 00:32:18.188 --rc geninfo_all_blocks=1 00:32:18.188 --rc geninfo_unexecuted_blocks=1 00:32:18.188 00:32:18.188 ' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.188 --rc genhtml_branch_coverage=1 00:32:18.188 --rc genhtml_function_coverage=1 00:32:18.188 --rc genhtml_legend=1 00:32:18.188 --rc geninfo_all_blocks=1 00:32:18.188 --rc geninfo_unexecuted_blocks=1 00:32:18.188 00:32:18.188 ' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.188 --rc genhtml_branch_coverage=1 00:32:18.188 --rc genhtml_function_coverage=1 00:32:18.188 --rc genhtml_legend=1 00:32:18.188 --rc geninfo_all_blocks=1 00:32:18.188 --rc geninfo_unexecuted_blocks=1 00:32:18.188 00:32:18.188 ' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:18.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.188 --rc genhtml_branch_coverage=1 00:32:18.188 --rc genhtml_function_coverage=1 00:32:18.188 --rc genhtml_legend=1 00:32:18.188 --rc geninfo_all_blocks=1 00:32:18.188 --rc geninfo_unexecuted_blocks=1 00:32:18.188 00:32:18.188 ' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.188 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.189 07:32:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.189 07:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.324 07:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:26.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:26.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:26.324 Found net devices under 0000:31:00.0: cvl_0_0 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:26.324 Found net devices under 0000:31:00.1: cvl_0_1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.324 
07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.324 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:32:26.325 00:32:26.325 --- 10.0.0.2 ping statistics --- 00:32:26.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.325 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:32:26.325 00:32:26.325 --- 10.0.0.1 ping statistics --- 00:32:26.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.325 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1524394 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1524394 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1524394 ']' 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:26.325 07:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:26.325 [2024-11-20 07:33:00.985690] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
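Before the lvol target comes up, nvmf/common.sh has stitched the two e810 ports found above (0000:31:00.0/1, exposed as cvl_0_0 and cvl_0_1) into a loopback pair: the target port moves into its own network namespace with 10.0.0.2 while the initiator keeps 10.0.0.1 in the root namespace, and one ping in each direction (0.663 ms and 0.312 ms above) proves the path. Collected from the interleaved trace, the sequence is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into its namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP toward the initiator port
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator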
00:32:26.325 [2024-11-20 07:33:00.986699] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:26.325 [2024-11-20 07:33:00.986738] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.325 [2024-11-20 07:33:01.074001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:26.586 [2024-11-20 07:33:01.113434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.586 [2024-11-20 07:33:01.113470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.586 [2024-11-20 07:33:01.113478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.586 [2024-11-20 07:33:01.113485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.586 [2024-11-20 07:33:01.113490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.586 [2024-11-20 07:33:01.114937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.586 [2024-11-20 07:33:01.114983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.586 [2024-11-20 07:33:01.114988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.586 [2024-11-20 07:33:01.172618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:26.586 [2024-11-20 07:33:01.173100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:26.586 [2024-11-20 07:33:01.173309] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:26.586 [2024-11-20 07:33:01.173618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
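The target itself runs inside that namespace (the ip netns exec ... nvmf_tgt invocation above), and its three flags explain the startup notices that follow: -m 0x7 is core mask 0b111, which is why three reactors come up on cores 0-2 with one nvmf_tgt_poll_group thread each; -e 0xFFFF enables every tracepoint group; and --interrupt-mode switches the reactors from busy-polling to sleeping on event file descriptors, the mode this whole test variant exercises:

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
#   -m 0x7            core mask 0b111 -> reactors on cores 0, 1 and 2
#   -e 0xFFFF         tracepoint group mask; capture with 'spdk_trace -s nvmf -i 0'
#   --interrupt-mode  reactors block on fds instead of spinning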
00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.160 07:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.420 [2024-11-20 07:33:01.979691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.420 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:27.681 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:27.681 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:27.681 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:27.681 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:27.941 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:28.201 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=40086c9f-e88c-48ad-b202-e26c740ddf13 00:32:28.201 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40086c9f-e88c-48ad-b202-e26c740ddf13 lvol 20 00:32:28.201 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=df503b6e-a79c-4f7b-847f-61c7f2d76023 00:32:28.201 07:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:28.461 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df503b6e-a79c-4f7b-847f-61c7f2d76023 00:32:28.722 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:28.722 [2024-11-20 07:33:03.423843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:28.722 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:28.983 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:28.983 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1524964 00:32:28.983 07:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:29.924 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot df503b6e-a79c-4f7b-847f-61c7f2d76023 MY_SNAPSHOT 00:32:30.185 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=450f4f1f-47b0-4124-bbdc-d4500e58d07f 00:32:30.185 07:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize df503b6e-a79c-4f7b-847f-61c7f2d76023 30 00:32:30.445 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 450f4f1f-47b0-4124-bbdc-d4500e58d07f MY_CLONE 00:32:30.706 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b0e2a1c1-eb2c-442b-8110-beb9b4b321c9 00:32:30.706 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b0e2a1c1-eb2c-442b-8110-beb9b4b321c9 00:32:30.966 07:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1524964 00:32:40.970 Initializing NVMe Controllers 00:32:40.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:40.970 Controller IO queue size 128, less than required. 00:32:40.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:40.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:40.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:40.970 Initialization complete. Launching workers. 
00:32:40.970 ======================================================== 00:32:40.970 Latency(us) 00:32:40.970 Device Information : IOPS MiB/s Average min max 00:32:40.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12280.87 47.97 10428.43 1619.21 49392.40 00:32:40.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14946.64 58.39 8564.90 3812.04 58880.74 00:32:40.970 ======================================================== 00:32:40.970 Total : 27227.52 106.36 9405.44 1619.21 58880.74 00:32:40.970 00:32:40.970 07:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df503b6e-a79c-4f7b-847f-61c7f2d76023 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40086c9f-e88c-48ad-b202-e26c740ddf13 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.970 rmmod nvme_tcp 00:32:40.970 rmmod nvme_fabrics 00:32:40.970 rmmod nvme_keyring 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1524394 ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1524394 ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1524394' 00:32:40.970 killing process with pid 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1524394 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.970 07:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:42.357 00:32:42.357 real 0m24.384s 00:32:42.357 user 0m55.788s 00:32:42.357 sys 0m10.998s 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:42.357 ************************************ 00:32:42.357 END TEST nvmf_lvol 00:32:42.357 ************************************ 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:42.357 ************************************ 00:32:42.357 START TEST nvmf_lvs_grow 00:32:42.357 
************************************ 00:32:42.357 07:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:42.357 * Looking for test storage... 00:32:42.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.357 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.358 --rc genhtml_branch_coverage=1 00:32:42.358 --rc genhtml_function_coverage=1 00:32:42.358 --rc genhtml_legend=1 00:32:42.358 --rc geninfo_all_blocks=1 00:32:42.358 --rc geninfo_unexecuted_blocks=1 00:32:42.358 00:32:42.358 ' 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.358 --rc genhtml_branch_coverage=1 00:32:42.358 --rc genhtml_function_coverage=1 00:32:42.358 --rc genhtml_legend=1 00:32:42.358 --rc geninfo_all_blocks=1 00:32:42.358 --rc geninfo_unexecuted_blocks=1 00:32:42.358 00:32:42.358 ' 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.358 --rc genhtml_branch_coverage=1 00:32:42.358 --rc genhtml_function_coverage=1 00:32:42.358 --rc genhtml_legend=1 00:32:42.358 --rc geninfo_all_blocks=1 00:32:42.358 --rc geninfo_unexecuted_blocks=1 00:32:42.358 00:32:42.358 ' 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.358 --rc genhtml_branch_coverage=1 00:32:42.358 --rc genhtml_function_coverage=1 00:32:42.358 --rc genhtml_legend=1 00:32:42.358 --rc geninfo_all_blocks=1 00:32:42.358 --rc geninfo_unexecuted_blocks=1 00:32:42.358 00:32:42.358 ' 00:32:42.358 07:33:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.358 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
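The xtrace around this point (nvmf/common.sh, build_nvmf_app_args) shows how the target invocation is assembled before launch; stripped of its guard conditions, the effect for this suite is roughly (paraphrased from the trace above, not from the source file):

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shared-memory id, full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                              # empty here; used only for the no-hugepages variant
NVMF_APP+=(--interrupt-mode)                             # this suite runs the target in interrupt mode
# later in the trace (nvmf/common.sh@293), once the namespace exists:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prefix with 'ip netns exec cvl_0_0_ns_spdk'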
00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:42.620 07:33:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.799 07:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:50.799 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:50.799 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.799 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:50.800 Found net devices under 0000:31:00.0: cvl_0_0 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:50.800 Found net devices under 0000:31:00.1: cvl_0_1 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.800 07:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.800 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:32:51.061 00:32:51.061 --- 10.0.0.2 ping statistics --- 00:32:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.061 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:32:51.061 00:32:51.061 --- 10.0.0.1 ping statistics --- 00:32:51.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.061 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1532286 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1532286 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1532286 ']' 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:51.061 07:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.061 [2024-11-20 07:33:25.802443] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
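Before this second target comes up, the trace above re-runs the NIC plumbing: the two E810 ports (cvl_0_0 and cvl_0_1) are split across a network namespace so target and initiator get independent IP stacks on the same host. Condensed (a sketch of the commands traced above; interface names are specific to this host):

ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP from the namespace
ping -c 1 10.0.0.2                                              # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1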
00:32:51.061 [2024-11-20 07:33:25.803484] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:51.061 [2024-11-20 07:33:25.803524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.323 [2024-11-20 07:33:25.889203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.324 [2024-11-20 07:33:25.923516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.324 [2024-11-20 07:33:25.923550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.324 [2024-11-20 07:33:25.923558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.324 [2024-11-20 07:33:25.923564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.324 [2024-11-20 07:33:25.923570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.324 [2024-11-20 07:33:25.924124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.324 [2024-11-20 07:33:25.979579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.324 [2024-11-20 07:33:25.979833] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.895 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:52.156 [2024-11-20 07:33:26.788581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:52.156 ************************************ 00:32:52.156 START TEST lvs_grow_clean 00:32:52.156 ************************************ 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:52.156 07:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:52.418 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:52.418 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f32ddcbe-3570-40e5-8e6b-120d131a288b 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:52.678 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f32ddcbe-3570-40e5-8e6b-120d131a288b lvol 150 00:32:52.939 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7494c624-5d53-4ab8-ac61-098c760acef8 00:32:52.939 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:52.939 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:53.200 [2024-11-20 07:33:27.796474] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:53.200 [2024-11-20 07:33:27.796542] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:53.200 true 00:32:53.200 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:32:53.200 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:53.461 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:53.461 07:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:53.461 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7494c624-5d53-4ab8-ac61-098c760acef8 00:32:53.721 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:53.721 [2024-11-20 07:33:28.464745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.721 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1532746 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1532746 /var/tmp/bdevperf.sock 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1532746 ']' 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:53.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:53.982 07:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.982 [2024-11-20 07:33:28.702981] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:32:53.982 [2024-11-20 07:33:28.703041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532746 ] 00:32:54.242 [2024-11-20 07:33:28.798766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.242 [2024-11-20 07:33:28.836954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.813 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:54.813 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:54.813 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:55.073 Nvme0n1 00:32:55.073 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:55.333 [ 00:32:55.333 { 00:32:55.333 "name": "Nvme0n1", 00:32:55.333 "aliases": [ 00:32:55.333 "7494c624-5d53-4ab8-ac61-098c760acef8" 00:32:55.333 ], 00:32:55.333 "product_name": "NVMe disk", 00:32:55.333 "block_size": 4096, 00:32:55.333 "num_blocks": 38912, 00:32:55.333 "uuid": "7494c624-5d53-4ab8-ac61-098c760acef8", 00:32:55.333 "numa_id": 0, 00:32:55.333 "assigned_rate_limits": { 00:32:55.333 "rw_ios_per_sec": 0, 00:32:55.333 "rw_mbytes_per_sec": 0, 00:32:55.333 "r_mbytes_per_sec": 0, 00:32:55.333 "w_mbytes_per_sec": 0 00:32:55.333 }, 00:32:55.333 "claimed": false, 00:32:55.333 "zoned": false, 00:32:55.333 "supported_io_types": { 00:32:55.333 "read": true, 00:32:55.333 "write": true, 00:32:55.333 "unmap": true, 00:32:55.333 "flush": true, 00:32:55.333 "reset": true, 00:32:55.333 "nvme_admin": true, 00:32:55.333 "nvme_io": true, 00:32:55.333 "nvme_io_md": false, 00:32:55.333 "write_zeroes": true, 00:32:55.333 "zcopy": false, 00:32:55.333 "get_zone_info": false, 00:32:55.333 "zone_management": false, 00:32:55.333 "zone_append": false, 00:32:55.333 "compare": true, 00:32:55.333 "compare_and_write": true, 00:32:55.333 "abort": true, 00:32:55.333 "seek_hole": false, 00:32:55.333 "seek_data": false, 00:32:55.333 "copy": true, 
00:32:55.333 "nvme_iov_md": false 00:32:55.333 }, 00:32:55.333 "memory_domains": [ 00:32:55.333 { 00:32:55.333 "dma_device_id": "system", 00:32:55.333 "dma_device_type": 1 00:32:55.333 } 00:32:55.333 ], 00:32:55.333 "driver_specific": { 00:32:55.333 "nvme": [ 00:32:55.333 { 00:32:55.333 "trid": { 00:32:55.333 "trtype": "TCP", 00:32:55.333 "adrfam": "IPv4", 00:32:55.333 "traddr": "10.0.0.2", 00:32:55.333 "trsvcid": "4420", 00:32:55.333 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:55.333 }, 00:32:55.333 "ctrlr_data": { 00:32:55.333 "cntlid": 1, 00:32:55.333 "vendor_id": "0x8086", 00:32:55.334 "model_number": "SPDK bdev Controller", 00:32:55.334 "serial_number": "SPDK0", 00:32:55.334 "firmware_revision": "25.01", 00:32:55.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.334 "oacs": { 00:32:55.334 "security": 0, 00:32:55.334 "format": 0, 00:32:55.334 "firmware": 0, 00:32:55.334 "ns_manage": 0 00:32:55.334 }, 00:32:55.334 "multi_ctrlr": true, 00:32:55.334 "ana_reporting": false 00:32:55.334 }, 00:32:55.334 "vs": { 00:32:55.334 "nvme_version": "1.3" 00:32:55.334 }, 00:32:55.334 "ns_data": { 00:32:55.334 "id": 1, 00:32:55.334 "can_share": true 00:32:55.334 } 00:32:55.334 } 00:32:55.334 ], 00:32:55.334 "mp_policy": "active_passive" 00:32:55.334 } 00:32:55.334 } 00:32:55.334 ] 00:32:55.334 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1533017 00:32:55.334 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:55.334 07:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:55.334 Running I/O for 10 seconds... 
00:32:56.718 Latency(us)
00:32:56.718 [2024-11-20T06:33:31.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:56.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:56.719 Nvme0n1 : 1.00 17355.00 67.79 0.00 0.00 0.00 0.00 0.00
00:32:56.719 [2024-11-20T06:33:31.486Z] ===================================================================================================================
00:32:56.719 [2024-11-20T06:33:31.486Z] Total : 17355.00 67.79 0.00 0.00 0.00 0.00 0.00
00:32:56.719
00:32:57.289 07:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f32ddcbe-3570-40e5-8e6b-120d131a288b
00:32:57.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:57.550 Nvme0n1 : 2.00 17437.50 68.12 0.00 0.00 0.00 0.00 0.00
00:32:57.550 [2024-11-20T06:33:32.317Z] ===================================================================================================================
00:32:57.550 [2024-11-20T06:33:32.317Z] Total : 17437.50 68.12 0.00 0.00 0.00 0.00 0.00
00:32:57.550
00:32:57.550 true
00:32:57.550 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b
00:32:57.550 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:32:57.812 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:32:57.812 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:32:57.812 07:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1533017
00:32:58.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:58.382 Nvme0n1 : 3.00 17470.33 68.24 0.00 0.00 0.00 0.00 0.00
00:32:58.382 [2024-11-20T06:33:33.150Z] ===================================================================================================================
00:32:58.383 [2024-11-20T06:33:33.150Z] Total : 17470.33 68.24 0.00 0.00 0.00 0.00 0.00
00:32:58.383
00:32:59.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:59.325 Nvme0n1 : 4.00 17498.75 68.35 0.00 0.00 0.00 0.00 0.00
00:32:59.325 [2024-11-20T06:33:34.092Z] ===================================================================================================================
00:32:59.325 [2024-11-20T06:33:34.092Z] Total : 17498.75 68.35 0.00 0.00 0.00 0.00 0.00
00:32:59.325
00:33:00.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:00.707 Nvme0n1 : 5.00 17525.40 68.46 0.00 0.00 0.00 0.00 0.00
00:33:00.707 [2024-11-20T06:33:35.474Z] ===================================================================================================================
00:33:00.707 [2024-11-20T06:33:35.474Z] Total : 17525.40 68.46 0.00 0.00 0.00 0.00 0.00
00:33:00.707
00:33:01.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:01.646 Nvme0n1 : 6.00 17545.83 68.54 0.00 0.00 0.00 0.00 0.00
00:33:01.646 [2024-11-20T06:33:36.413Z] ===================================================================================================================
00:33:01.646 [2024-11-20T06:33:36.413Z] Total : 17545.83 68.54 0.00 0.00 0.00 0.00 0.00
00:33:01.646
00:33:02.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:02.587 Nvme0n1 : 7.00 17562.71 68.60 0.00 0.00 0.00 0.00 0.00
00:33:02.587 [2024-11-20T06:33:37.354Z] ===================================================================================================================
00:33:02.587 [2024-11-20T06:33:37.354Z] Total : 17562.71 68.60 0.00 0.00 0.00 0.00 0.00
00:33:02.587
00:33:03.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:03.528 Nvme0n1 : 8.00 17577.38 68.66 0.00 0.00 0.00 0.00 0.00
00:33:03.528 [2024-11-20T06:33:38.295Z] ===================================================================================================================
00:33:03.528 [2024-11-20T06:33:38.295Z] Total : 17577.38 68.66 0.00 0.00 0.00 0.00 0.00
00:33:03.528
00:33:04.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:04.468 Nvme0n1 : 9.00 17590.56 68.71 0.00 0.00 0.00 0.00 0.00
00:33:04.468 [2024-11-20T06:33:39.235Z] ===================================================================================================================
00:33:04.468 [2024-11-20T06:33:39.235Z] Total : 17590.56 68.71 0.00 0.00 0.00 0.00 0.00
00:33:04.468
00:33:05.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:05.411 Nvme0n1 : 10.00 17597.90 68.74 0.00 0.00 0.00 0.00 0.00
00:33:05.411 [2024-11-20T06:33:40.178Z] ===================================================================================================================
00:33:05.411 [2024-11-20T06:33:40.178Z] Total : 17597.90 68.74 0.00 0.00 0.00 0.00 0.00
00:33:05.411
00:33:05.411
00:33:05.411 Latency(us)
00:33:05.411 [2024-11-20T06:33:40.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:05.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:05.411 Nvme0n1 : 10.01 17597.55 68.74 0.00 0.00 7268.89 2280.11 10048.85
00:33:05.411 [2024-11-20T06:33:40.178Z] ===================================================================================================================
00:33:05.411 [2024-11-20T06:33:40.178Z] Total : 17597.55 68.74 0.00 0.00 7268.89 2280.11 10048.85
00:33:05.411 {
00:33:05.411 "results": [
00:33:05.411 {
00:33:05.411 "job": "Nvme0n1",
00:33:05.411 "core_mask": "0x2",
00:33:05.411 "workload": "randwrite",
00:33:05.411 "status": "finished",
00:33:05.411 "queue_depth": 128,
00:33:05.411 "io_size": 4096,
00:33:05.411 "runtime": 10.006563,
00:33:05.411 "iops": 17597.55072745757,
00:33:05.411 "mibps": 68.74043252913113,
00:33:05.411 "io_failed": 0,
00:33:05.411 "io_timeout": 0,
00:33:05.411 "avg_latency_us": 7268.890845755888,
00:33:05.411 "min_latency_us": 2280.1066666666666,
00:33:05.411 "max_latency_us": 10048.853333333333
00:33:05.411 }
00:33:05.411 ],
00:33:05.411 "core_count": 1
00:33:05.411 }
00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1532746
00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1532746 ']'
00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1532746
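What the table just verified: the lvstore was created on a 200 MiB aio file with a 4 MiB cluster size, so 50 clusters minus one for metadata leaves the 49 total_data_clusters checked earlier; after the backing file is truncated to 400 MiB and rescanned (old block count 51200, new block count 102400, at a 4096-byte block size), bdev_lvol_grow_lvstore doubles that to 99 while the randwrite workload keeps running, with IOPS holding steady (about 17.4k at second 1, 17.6k at second 10) across the grow at second 2. A sketch of the grow-and-verify step, condensing commands traced above (relative paths; the uuid is the one from this run):

  uuid=f32ddcbe-3570-40e5-8e6b-120d131a288b
  truncate -s 400M test/nvmf/target/aio_bdev           # enlarge the backing file
  ./scripts/rpc.py bdev_aio_rescan aio_bdev            # 51200 -> 102400 blocks
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$uuid"   # claim the new clusters
  clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))  # 400 MiB / 4 MiB = 100 clusters, one reserved for metadata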
00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:05.411 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1532746 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1532746' 00:33:05.671 killing process with pid 1532746 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1532746 00:33:05.671 Received shutdown signal, test time was about 10.000000 seconds 00:33:05.671 00:33:05.671 Latency(us) 00:33:05.671 [2024-11-20T06:33:40.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.671 [2024-11-20T06:33:40.438Z] =================================================================================================================== 00:33:05.671 [2024-11-20T06:33:40.438Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1532746 00:33:05.671 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:05.931 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:05.931 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:05.931 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:06.192 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:06.192 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:06.192 07:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:06.453 [2024-11-20 07:33:40.980636] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 
00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:06.453 request: 00:33:06.453 { 00:33:06.453 "uuid": "f32ddcbe-3570-40e5-8e6b-120d131a288b", 00:33:06.453 "method": "bdev_lvol_get_lvstores", 00:33:06.453 "req_id": 1 00:33:06.453 } 00:33:06.453 Got JSON-RPC error response 00:33:06.453 response: 00:33:06.453 { 00:33:06.453 "code": -19, 00:33:06.453 "message": "No such device" 00:33:06.453 } 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:06.453 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:06.753 aio_bdev 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7494c624-5d53-4ab8-ac61-098c760acef8 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=7494c624-5d53-4ab8-ac61-098c760acef8 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:06.753 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:07.040 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7494c624-5d53-4ab8-ac61-098c760acef8 -t 2000 00:33:07.040 [ 00:33:07.040 { 00:33:07.040 "name": "7494c624-5d53-4ab8-ac61-098c760acef8", 00:33:07.040 "aliases": [ 00:33:07.040 "lvs/lvol" 00:33:07.040 ], 00:33:07.040 "product_name": "Logical Volume", 00:33:07.040 "block_size": 4096, 00:33:07.040 "num_blocks": 38912, 00:33:07.040 "uuid": "7494c624-5d53-4ab8-ac61-098c760acef8", 00:33:07.040 "assigned_rate_limits": { 00:33:07.040 "rw_ios_per_sec": 0, 00:33:07.040 "rw_mbytes_per_sec": 0, 00:33:07.040 "r_mbytes_per_sec": 0, 00:33:07.040 "w_mbytes_per_sec": 0 00:33:07.040 }, 00:33:07.040 "claimed": false, 00:33:07.040 "zoned": false, 00:33:07.040 "supported_io_types": { 00:33:07.040 "read": true, 00:33:07.040 "write": true, 00:33:07.040 "unmap": true, 00:33:07.040 "flush": false, 00:33:07.040 "reset": true, 00:33:07.040 "nvme_admin": false, 00:33:07.040 "nvme_io": false, 00:33:07.040 "nvme_io_md": false, 00:33:07.040 "write_zeroes": true, 00:33:07.040 "zcopy": false, 00:33:07.040 "get_zone_info": false, 00:33:07.040 "zone_management": false, 00:33:07.040 "zone_append": false, 00:33:07.040 "compare": false, 00:33:07.040 "compare_and_write": false, 00:33:07.040 "abort": false, 00:33:07.040 "seek_hole": true, 00:33:07.040 "seek_data": true, 00:33:07.040 "copy": false, 00:33:07.040 "nvme_iov_md": false 00:33:07.040 }, 00:33:07.040 "driver_specific": { 00:33:07.040 "lvol": { 00:33:07.040 "lvol_store_uuid": "f32ddcbe-3570-40e5-8e6b-120d131a288b", 00:33:07.040 "base_bdev": "aio_bdev", 00:33:07.040 "thin_provision": false, 00:33:07.040 "num_allocated_clusters": 38, 00:33:07.040 "snapshot": false, 00:33:07.040 "clone": false, 00:33:07.040 "esnap_clone": false 00:33:07.040 } 00:33:07.040 } 00:33:07.040 } 00:33:07.040 ] 00:33:07.040 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:33:07.040 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:07.040 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:07.384 07:33:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:07.384 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:07.384 07:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:07.384 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:07.384 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7494c624-5d53-4ab8-ac61-098c760acef8 00:33:07.644 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f32ddcbe-3570-40e5-8e6b-120d131a288b 00:33:07.905 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:07.905 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:07.905 00:33:07.905 real 0m15.781s 00:33:07.905 user 0m15.400s 00:33:07.905 sys 0m1.404s 00:33:07.905 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:07.905 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.905 ************************************ 00:33:07.905 END TEST lvs_grow_clean 00:33:07.905 ************************************ 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.165 ************************************ 00:33:08.165 START TEST lvs_grow_dirty 00:33:08.165 ************************************ 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.165 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:08.426 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:08.426 07:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:08.426 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:08.426 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:08.426 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:08.687 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:08.687 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:08.687 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 lvol 150 00:33:08.947 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:08.947 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.947 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:08.947 [2024-11-20 07:33:43.608565] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:08.947 [2024-11-20 07:33:43.608716] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:08.947 true 00:33:08.947 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:08.947 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:09.208 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:09.208 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:09.208 07:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:09.468 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.729 [2024-11-20 07:33:44.269037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1535758 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1535758 /var/tmp/bdevperf.sock 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1535758 ']' 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:09.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
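The dirty variant repeats the same target-side provisioning traced above: a 200 MiB file-backed aio bdev, a lvstore with 4 MiB clusters, a 150 MiB lvol, and an NVMe/TCP subsystem exporting it. A condensed sketch of that setup, with the commands as they appear in the trace (relative paths instead of the full Jenkins workspace prefix):

  truncate -s 200M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB rounds up to 38 clusters
  # export the lvol over NVMe/TCP
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420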
00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:09.729 07:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:09.990 [2024-11-20 07:33:44.511204] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:09.990 [2024-11-20 07:33:44.511263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535758 ] 00:33:09.990 [2024-11-20 07:33:44.600292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.990 [2024-11-20 07:33:44.630301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.561 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:10.561 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:33:10.561 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:10.821 Nvme0n1 00:33:11.082 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:11.082 [ 00:33:11.082 { 00:33:11.082 "name": "Nvme0n1", 00:33:11.082 "aliases": [ 00:33:11.082 "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf" 00:33:11.082 ], 00:33:11.082 "product_name": "NVMe disk", 00:33:11.082 "block_size": 4096, 00:33:11.082 "num_blocks": 38912, 00:33:11.082 "uuid": "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf", 00:33:11.082 "numa_id": 0, 00:33:11.082 "assigned_rate_limits": { 00:33:11.082 "rw_ios_per_sec": 0, 00:33:11.082 "rw_mbytes_per_sec": 0, 00:33:11.082 "r_mbytes_per_sec": 0, 00:33:11.082 "w_mbytes_per_sec": 0 00:33:11.082 }, 00:33:11.082 "claimed": false, 00:33:11.082 "zoned": false, 00:33:11.082 "supported_io_types": { 00:33:11.082 "read": true, 00:33:11.082 "write": true, 00:33:11.082 "unmap": true, 00:33:11.082 "flush": true, 00:33:11.082 "reset": true, 00:33:11.082 "nvme_admin": true, 00:33:11.082 "nvme_io": true, 00:33:11.082 "nvme_io_md": false, 00:33:11.082 "write_zeroes": true, 00:33:11.082 "zcopy": false, 00:33:11.082 "get_zone_info": false, 00:33:11.082 "zone_management": false, 00:33:11.082 "zone_append": false, 00:33:11.082 "compare": true, 00:33:11.082 "compare_and_write": true, 00:33:11.082 "abort": true, 00:33:11.082 "seek_hole": false, 00:33:11.082 "seek_data": false, 00:33:11.082 "copy": true, 00:33:11.082 "nvme_iov_md": false 00:33:11.082 }, 00:33:11.082 "memory_domains": [ 00:33:11.082 { 00:33:11.082 "dma_device_id": "system", 00:33:11.082 "dma_device_type": 1 00:33:11.082 } 00:33:11.082 ], 00:33:11.082 "driver_specific": { 00:33:11.082 "nvme": [ 00:33:11.082 { 00:33:11.082 "trid": { 00:33:11.082 "trtype": "TCP", 00:33:11.082 "adrfam": "IPv4", 00:33:11.082 "traddr": "10.0.0.2", 00:33:11.082 "trsvcid": "4420", 00:33:11.082 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:11.082 }, 00:33:11.082 "ctrlr_data": 
{
00:33:11.082 "cntlid": 1,
00:33:11.082 "vendor_id": "0x8086",
00:33:11.082 "model_number": "SPDK bdev Controller",
00:33:11.082 "serial_number": "SPDK0",
00:33:11.082 "firmware_revision": "25.01",
00:33:11.082 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:11.082 "oacs": {
00:33:11.082 "security": 0,
00:33:11.082 "format": 0,
00:33:11.082 "firmware": 0,
00:33:11.082 "ns_manage": 0
00:33:11.082 },
00:33:11.082 "multi_ctrlr": true,
00:33:11.082 "ana_reporting": false
00:33:11.082 },
00:33:11.082 "vs": {
00:33:11.082 "nvme_version": "1.3"
00:33:11.082 },
00:33:11.082 "ns_data": {
00:33:11.082 "id": 1,
00:33:11.082 "can_share": true
00:33:11.082 }
00:33:11.082 }
00:33:11.082 ],
00:33:11.082 "mp_policy": "active_passive"
00:33:11.082 }
00:33:11.082 }
00:33:11.082 ]
00:33:11.082 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1536088
00:33:11.082 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:33:11.082 07:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:11.342 Running I/O for 10 seconds...
00:33:12.284 Latency(us)
00:33:12.284 [2024-11-20T06:33:47.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:12.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:12.284 Nvme0n1 : 1.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00
00:33:12.284 [2024-11-20T06:33:47.051Z] ===================================================================================================================
00:33:12.284 [2024-11-20T06:33:47.051Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00
00:33:12.284
00:33:13.234 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa232a0b-6c76-43d9-96d8-7f3be57ef289
00:33:13.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:13.235 Nvme0n1 : 2.00 17843.50 69.70 0.00 0.00 0.00 0.00 0.00
00:33:13.235 [2024-11-20T06:33:48.002Z] ===================================================================================================================
00:33:13.235 [2024-11-20T06:33:48.002Z] Total : 17843.50 69.70 0.00 0.00 0.00 0.00 0.00
00:33:13.235
00:33:13.235 true
00:33:13.235 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289
00:33:13.235 07:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:13.495 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:13.495 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:13.495 07:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1536088
00:33:14.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:14.434 Nvme0n1 : 3.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00
00:33:14.434 [2024-11-20T06:33:49.201Z] ===================================================================================================================
00:33:14.434 [2024-11-20T06:33:49.201Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00
00:33:14.434
00:33:15.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:15.386 Nvme0n1 : 4.00 17938.75 70.07 0.00 0.00 0.00 0.00 0.00
00:33:15.386 [2024-11-20T06:33:50.153Z] ===================================================================================================================
00:33:15.386 [2024-11-20T06:33:50.153Z] Total : 17938.75 70.07 0.00 0.00 0.00 0.00 0.00
00:33:15.386
00:33:16.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:16.330 Nvme0n1 : 5.00 17957.80 70.15 0.00 0.00 0.00 0.00 0.00
00:33:16.330 [2024-11-20T06:33:51.097Z] ===================================================================================================================
00:33:16.330 [2024-11-20T06:33:51.097Z] Total : 17957.80 70.15 0.00 0.00 0.00 0.00 0.00
00:33:16.330
00:33:17.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:17.273 Nvme0n1 : 6.00 17970.50 70.20 0.00 0.00 0.00 0.00 0.00
00:33:17.273 [2024-11-20T06:33:52.040Z] ===================================================================================================================
00:33:17.273 [2024-11-20T06:33:52.040Z] Total : 17970.50 70.20 0.00 0.00 0.00 0.00 0.00
00:33:17.273
00:33:18.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:18.217 Nvme0n1 : 7.00 17988.71 70.27 0.00 0.00 0.00 0.00 0.00
00:33:18.217 [2024-11-20T06:33:52.984Z] ===================================================================================================================
00:33:18.217 [2024-11-20T06:33:52.984Z] Total : 17988.71 70.27 0.00 0.00 0.00 0.00 0.00
00:33:18.217
00:33:19.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:19.161 Nvme0n1 : 8.00 18002.25 70.32 0.00 0.00 0.00 0.00 0.00
00:33:19.161 [2024-11-20T06:33:53.928Z] ===================================================================================================================
00:33:19.161 [2024-11-20T06:33:53.928Z] Total : 18002.25 70.32 0.00 0.00 0.00 0.00 0.00
00:33:19.161
00:33:20.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:20.545 Nvme0n1 : 9.00 18019.89 70.39 0.00 0.00 0.00 0.00 0.00
00:33:20.545 [2024-11-20T06:33:55.312Z] ===================================================================================================================
00:33:20.545 [2024-11-20T06:33:55.312Z] Total : 18019.89 70.39 0.00 0.00 0.00 0.00 0.00
00:33:20.545
00:33:21.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:21.118 Nvme0n1 : 10.00 18021.30 70.40 0.00 0.00 0.00 0.00 0.00
00:33:21.118 [2024-11-20T06:33:55.885Z] ===================================================================================================================
00:33:21.118 [2024-11-20T06:33:55.885Z] Total : 18021.30 70.40 0.00 0.00 0.00 0.00 0.00
00:33:21.118
00:33:21.118
00:33:21.118 Latency(us)
00:33:21.118 [2024-11-20T06:33:55.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:21.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:21.118 Nvme0n1 : 10.00 18027.03 70.42 0.00 0.00 7097.51 5843.63 14417.92
00:33:21.118 [2024-11-20T06:33:55.885Z] ===================================================================================================================
00:33:21.118 [2024-11-20T06:33:55.885Z] Total : 18027.03 70.42 0.00 0.00 7097.51 5843.63 14417.92
00:33:21.118 {
00:33:21.118 "results": [
00:33:21.118 {
00:33:21.118 "job": "Nvme0n1",
00:33:21.118 "core_mask": "0x2",
00:33:21.118 "workload": "randwrite",
00:33:21.118 "status": "finished",
00:33:21.118 "queue_depth": 128,
00:33:21.118 "io_size": 4096,
00:33:21.118 "runtime": 10.003923,
00:33:21.118 "iops": 18027.027996916808,
00:33:21.118 "mibps": 70.41807811295628,
00:33:21.118 "io_failed": 0,
00:33:21.118 "io_timeout": 0,
00:33:21.118 "avg_latency_us": 7097.511669559334,
00:33:21.118 "min_latency_us": 5843.626666666667,
00:33:21.118 "max_latency_us": 14417.92
00:33:21.118 }
00:33:21.118 ],
00:33:21.118 "core_count": 1
00:33:21.118 }
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1535758
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1535758 ']'
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1535758
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1535758
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1535758'
00:33:21.380 killing process with pid 1535758
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1535758
00:33:21.380 Received shutdown signal, test time was about 10.000000 seconds
00:33:21.380
00:33:21.380 Latency(us)
00:33:21.380 [2024-11-20T06:33:56.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:21.380 [2024-11-20T06:33:56.147Z] ===================================================================================================================
00:33:21.380 [2024-11-20T06:33:56.147Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:21.380 07:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1535758
00:33:21.380 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:21.641 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem
nqn.2016-06.io.spdk:cnode0 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1532286 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1532286 00:33:21.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1532286 Killed "${NVMF_APP[@]}" "$@" 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:21.902 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:21.903 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.903 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.903 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1538107 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1538107 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1538107 ']' 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:22.165 07:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:22.165 [2024-11-20 07:33:56.725367] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:22.165 [2024-11-20 07:33:56.726855] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:22.165 [2024-11-20 07:33:56.726930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.165 [2024-11-20 07:33:56.815752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.165 [2024-11-20 07:33:56.852628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.165 [2024-11-20 07:33:56.852664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.165 [2024-11-20 07:33:56.852671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.165 [2024-11-20 07:33:56.852678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.165 [2024-11-20 07:33:56.852684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.165 [2024-11-20 07:33:56.853241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.165 [2024-11-20 07:33:56.908932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:22.165 [2024-11-20 07:33:56.909178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
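This is the point of the "dirty" scenario: the first target (pid 1532286) was killed with SIGKILL while the grown lvstore was still open, so its metadata was never closed out cleanly. A fresh nvmf_tgt is started in interrupt mode, and re-creating the aio bdev forces the blobstore to detect the unclean shutdown and replay its metadata, which is what the bs_recover and "Recover: blob" notices that follow record. A sketch of that restart, with the nvmf_tgt invocation copied from the trace ($nvmf_pid is a placeholder for the old target's pid):

  kill -9 "$nvmf_pid"   # hard kill: lvstore left dirty on disk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  # re-attaching the same backing file triggers blobstore recovery before lvs/lvol reappears
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096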
00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:23.110 [2024-11-20 07:33:57.707813] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:23.110 [2024-11-20 07:33:57.707933] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:23.110 [2024-11-20 07:33:57.707966] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:23.110 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:23.371 07:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf -t 2000 00:33:23.371 [ 00:33:23.371 { 00:33:23.371 "name": "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf", 00:33:23.371 "aliases": [ 00:33:23.371 "lvs/lvol" 00:33:23.371 ], 00:33:23.371 "product_name": "Logical Volume", 00:33:23.371 "block_size": 4096, 00:33:23.371 "num_blocks": 38912, 00:33:23.371 "uuid": "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf", 00:33:23.371 "assigned_rate_limits": { 00:33:23.371 "rw_ios_per_sec": 0, 00:33:23.371 "rw_mbytes_per_sec": 0, 00:33:23.371 
"r_mbytes_per_sec": 0, 00:33:23.371 "w_mbytes_per_sec": 0 00:33:23.371 }, 00:33:23.371 "claimed": false, 00:33:23.371 "zoned": false, 00:33:23.371 "supported_io_types": { 00:33:23.371 "read": true, 00:33:23.371 "write": true, 00:33:23.371 "unmap": true, 00:33:23.371 "flush": false, 00:33:23.371 "reset": true, 00:33:23.371 "nvme_admin": false, 00:33:23.371 "nvme_io": false, 00:33:23.371 "nvme_io_md": false, 00:33:23.371 "write_zeroes": true, 00:33:23.371 "zcopy": false, 00:33:23.371 "get_zone_info": false, 00:33:23.371 "zone_management": false, 00:33:23.371 "zone_append": false, 00:33:23.371 "compare": false, 00:33:23.371 "compare_and_write": false, 00:33:23.371 "abort": false, 00:33:23.371 "seek_hole": true, 00:33:23.371 "seek_data": true, 00:33:23.371 "copy": false, 00:33:23.371 "nvme_iov_md": false 00:33:23.371 }, 00:33:23.371 "driver_specific": { 00:33:23.371 "lvol": { 00:33:23.371 "lvol_store_uuid": "aa232a0b-6c76-43d9-96d8-7f3be57ef289", 00:33:23.371 "base_bdev": "aio_bdev", 00:33:23.371 "thin_provision": false, 00:33:23.371 "num_allocated_clusters": 38, 00:33:23.371 "snapshot": false, 00:33:23.371 "clone": false, 00:33:23.371 "esnap_clone": false 00:33:23.371 } 00:33:23.371 } 00:33:23.371 } 00:33:23.371 ] 00:33:23.371 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:33:23.371 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:23.371 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:23.633 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:23.633 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:23.633 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:23.894 [2024-11-20 07:33:58.589635] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:23.894 07:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:23.894 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:24.156 request: 00:33:24.156 { 00:33:24.156 "uuid": "aa232a0b-6c76-43d9-96d8-7f3be57ef289", 00:33:24.156 "method": "bdev_lvol_get_lvstores", 00:33:24.156 "req_id": 1 00:33:24.156 } 00:33:24.156 Got JSON-RPC error response 00:33:24.156 response: 00:33:24.156 { 00:33:24.156 "code": -19, 00:33:24.156 "message": "No such device" 00:33:24.156 } 00:33:24.156 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:33:24.156 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:24.156 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:24.156 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:24.156 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:24.417 aio_bdev 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:24.417 07:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:24.417 07:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:24.417 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf -t 2000 00:33:24.677 [ 00:33:24.677 { 00:33:24.677 "name": "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf", 00:33:24.677 "aliases": [ 00:33:24.677 "lvs/lvol" 00:33:24.677 ], 00:33:24.677 "product_name": "Logical Volume", 00:33:24.677 "block_size": 4096, 00:33:24.677 "num_blocks": 38912, 00:33:24.677 "uuid": "0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf", 00:33:24.677 "assigned_rate_limits": { 00:33:24.677 "rw_ios_per_sec": 0, 00:33:24.677 "rw_mbytes_per_sec": 0, 00:33:24.677 "r_mbytes_per_sec": 0, 00:33:24.677 "w_mbytes_per_sec": 0 00:33:24.677 }, 00:33:24.677 "claimed": false, 00:33:24.677 "zoned": false, 00:33:24.677 "supported_io_types": { 00:33:24.677 "read": true, 00:33:24.678 "write": true, 00:33:24.678 "unmap": true, 00:33:24.678 "flush": false, 00:33:24.678 "reset": true, 00:33:24.678 "nvme_admin": false, 00:33:24.678 "nvme_io": false, 00:33:24.678 "nvme_io_md": false, 00:33:24.678 "write_zeroes": true, 00:33:24.678 "zcopy": false, 00:33:24.678 "get_zone_info": false, 00:33:24.678 "zone_management": false, 00:33:24.678 "zone_append": false, 00:33:24.678 "compare": false, 00:33:24.678 "compare_and_write": false, 00:33:24.678 "abort": false, 00:33:24.678 "seek_hole": true, 00:33:24.678 "seek_data": true, 00:33:24.678 "copy": false, 00:33:24.678 "nvme_iov_md": false 00:33:24.678 }, 00:33:24.678 "driver_specific": { 00:33:24.678 "lvol": { 00:33:24.678 "lvol_store_uuid": "aa232a0b-6c76-43d9-96d8-7f3be57ef289", 00:33:24.678 "base_bdev": "aio_bdev", 00:33:24.678 "thin_provision": false, 00:33:24.678 "num_allocated_clusters": 38, 00:33:24.678 "snapshot": false, 00:33:24.678 "clone": false, 00:33:24.678 "esnap_clone": false 00:33:24.678 } 00:33:24.678 } 00:33:24.678 } 00:33:24.678 ] 00:33:24.678 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:33:24.678 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:24.678 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:24.939 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:24.939 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:24.939 07:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:24.939 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:24.939 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d68e6d0-ff4a-4d4d-882e-e9fa21b795bf 00:33:25.201 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa232a0b-6c76-43d9-96d8-7f3be57ef289 00:33:25.201 07:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:25.462 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:25.462 00:33:25.462 real 0m17.439s 00:33:25.462 user 0m35.301s 00:33:25.462 sys 0m3.009s 00:33:25.462 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:25.462 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:25.462 ************************************ 00:33:25.463 END TEST lvs_grow_dirty 00:33:25.463 ************************************ 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:33:25.463 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:25.463 nvmf_trace.0 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.724 rmmod nvme_tcp 00:33:25.724 rmmod nvme_fabrics 00:33:25.724 rmmod nvme_keyring 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1538107 ']' 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1538107 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1538107 ']' 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1538107 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1538107 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1538107' 00:33:25.724 killing process with pid 1538107 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1538107 00:33:25.724 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1538107 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.985 07:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.899 00:33:27.899 real 0m45.685s 00:33:27.899 user 0m53.894s 00:33:27.899 sys 0m11.389s 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:27.899 ************************************ 00:33:27.899 END TEST nvmf_lvs_grow 00:33:27.899 ************************************ 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:27.899 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:28.161 ************************************ 00:33:28.161 START TEST nvmf_bdev_io_wait 00:33:28.161 ************************************ 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:28.162 * Looking for test storage... 
00:33:28.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.162 --rc genhtml_branch_coverage=1 00:33:28.162 --rc genhtml_function_coverage=1 00:33:28.162 --rc genhtml_legend=1 00:33:28.162 --rc geninfo_all_blocks=1 00:33:28.162 --rc geninfo_unexecuted_blocks=1 00:33:28.162 00:33:28.162 ' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.162 --rc genhtml_branch_coverage=1 00:33:28.162 --rc genhtml_function_coverage=1 00:33:28.162 --rc genhtml_legend=1 00:33:28.162 --rc geninfo_all_blocks=1 00:33:28.162 --rc geninfo_unexecuted_blocks=1 00:33:28.162 00:33:28.162 ' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.162 --rc genhtml_branch_coverage=1 00:33:28.162 --rc genhtml_function_coverage=1 00:33:28.162 --rc genhtml_legend=1 00:33:28.162 --rc geninfo_all_blocks=1 00:33:28.162 --rc geninfo_unexecuted_blocks=1 00:33:28.162 00:33:28.162 ' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:28.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.162 --rc genhtml_branch_coverage=1 00:33:28.162 --rc genhtml_function_coverage=1 00:33:28.162 --rc genhtml_legend=1 00:33:28.162 --rc geninfo_all_blocks=1 00:33:28.162 --rc 
geninfo_unexecuted_blocks=1 00:33:28.162 00:33:28.162 ' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:28.162 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.163 07:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.309 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:36.310 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:36.310 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:36.310 Found net devices under 0000:31:00.0: cvl_0_0 00:33:36.310 
07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:36.310 Found net devices under 0000:31:00.1: cvl_0_1 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.310 07:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.310 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.310 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.310 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.310 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:33:36.573 00:33:36.573 --- 10.0.0.2 ping statistics --- 00:33:36.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.573 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:33:36.573 00:33:36.573 --- 10.0.0.1 ping statistics --- 00:33:36.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.573 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1543514 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1543514 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1543514 ']' 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:36.573 07:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.573 [2024-11-20 07:34:11.273252] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.573 [2024-11-20 07:34:11.274747] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:36.573 [2024-11-20 07:34:11.274816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.834 [2024-11-20 07:34:11.369699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.834 [2024-11-20 07:34:11.412039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.834 [2024-11-20 07:34:11.412077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.834 [2024-11-20 07:34:11.412085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.834 [2024-11-20 07:34:11.412092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.835 [2024-11-20 07:34:11.412098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.835 [2024-11-20 07:34:11.413897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.835 [2024-11-20 07:34:11.414023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.835 [2024-11-20 07:34:11.414175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.835 [2024-11-20 07:34:11.414175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:36.835 [2024-11-20 07:34:11.414439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.406 [2024-11-20 07:34:12.144252] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:37.406 [2024-11-20 07:34:12.144619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:37.406 [2024-11-20 07:34:12.145444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:37.406 [2024-11-20 07:34:12.145521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.406 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.407 [2024-11-20 07:34:12.154961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.407 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.407 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:37.407 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.407 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.668 Malloc0 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:37.668 [2024-11-20 07:34:12.214781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1543772 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1543774 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.668 { 00:33:37.668 "params": { 00:33:37.668 "name": "Nvme$subsystem", 00:33:37.668 "trtype": "$TEST_TRANSPORT", 00:33:37.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.668 "adrfam": "ipv4", 00:33:37.668 "trsvcid": "$NVMF_PORT", 00:33:37.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.668 "hdgst": ${hdgst:-false}, 00:33:37.668 "ddgst": ${ddgst:-false} 00:33:37.668 }, 00:33:37.668 "method": "bdev_nvme_attach_controller" 00:33:37.668 } 00:33:37.668 EOF 00:33:37.668 )") 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1543777 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1543781 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.668 { 00:33:37.668 "params": { 00:33:37.668 "name": "Nvme$subsystem", 00:33:37.668 "trtype": "$TEST_TRANSPORT", 00:33:37.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.668 "adrfam": "ipv4", 00:33:37.668 "trsvcid": "$NVMF_PORT", 00:33:37.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.668 "hdgst": ${hdgst:-false}, 00:33:37.668 "ddgst": ${ddgst:-false} 00:33:37.668 }, 00:33:37.668 "method": "bdev_nvme_attach_controller" 00:33:37.668 } 00:33:37.668 EOF 00:33:37.668 )") 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.668 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.669 { 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme$subsystem", 00:33:37.669 "trtype": "$TEST_TRANSPORT", 00:33:37.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "$NVMF_PORT", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.669 "hdgst": ${hdgst:-false}, 00:33:37.669 "ddgst": ${ddgst:-false} 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 } 00:33:37.669 EOF 00:33:37.669 )") 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.669 { 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme$subsystem", 00:33:37.669 "trtype": "$TEST_TRANSPORT", 00:33:37.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "$NVMF_PORT", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.669 "hdgst": ${hdgst:-false}, 00:33:37.669 "ddgst": ${ddgst:-false} 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 } 00:33:37.669 EOF 00:33:37.669 )") 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1543772 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
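Each of the four bdevperf jobs above receives its NVMe-oF attach configuration from gen_nvmf_target_json, which expands the heredoc template once per subsystem, joins the fragments with IFS=',', and validates the result with jq before handing it to bdevperf over /dev/fd/63 via process substitution. A simplified sketch of that templating step, assuming the harness has already exported TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT (tcp / 10.0.0.2 / 4420 in this run); the wrapping object below is an approximation of what nvmf/common.sh actually emits, not a verbatim copy:

    # Sketch of the gen_nvmf_target_json helper traced above.
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One bdev_nvme_attach_controller call per requested subsystem index.
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,    # join the per-subsystem objects with commas
        jq . <<JSON
    { "subsystems": [ { "subsystem": "bdev",
                        "config": [ ${config[*]} ] } ] }
JSON
    }

A consumer then passes it exactly as seen in the trace: bdevperf ... --json <(gen_nvmf_target_json), which is where the /dev/fd/63 path comes from.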
00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme1", 00:33:37.669 "trtype": "tcp", 00:33:37.669 "traddr": "10.0.0.2", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "4420", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.669 "hdgst": false, 00:33:37.669 "ddgst": false 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 }' 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme1", 00:33:37.669 "trtype": "tcp", 00:33:37.669 "traddr": "10.0.0.2", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "4420", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.669 "hdgst": false, 00:33:37.669 "ddgst": false 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 }' 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme1", 00:33:37.669 "trtype": "tcp", 00:33:37.669 "traddr": "10.0.0.2", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "4420", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.669 "hdgst": false, 00:33:37.669 "ddgst": false 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 }' 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:37.669 07:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.669 "params": { 00:33:37.669 "name": "Nvme1", 00:33:37.669 "trtype": "tcp", 00:33:37.669 "traddr": "10.0.0.2", 00:33:37.669 "adrfam": "ipv4", 00:33:37.669 "trsvcid": "4420", 00:33:37.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.669 "hdgst": false, 00:33:37.669 "ddgst": false 00:33:37.669 }, 00:33:37.669 "method": "bdev_nvme_attach_controller" 00:33:37.669 }' 00:33:37.669 [2024-11-20 07:34:12.269837] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:37.669 [2024-11-20 07:34:12.269900] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:37.669 [2024-11-20 07:34:12.272323] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:33:37.669 [2024-11-20 07:34:12.272371] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:37.669 [2024-11-20 07:34:12.272871] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:37.669 [2024-11-20 07:34:12.272918] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:37.669 [2024-11-20 07:34:12.274105] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:37.669 [2024-11-20 07:34:12.274151] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:37.930 [2024-11-20 07:34:12.437980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.930 [2024-11-20 07:34:12.467094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:37.930 [2024-11-20 07:34:12.495555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.930 [2024-11-20 07:34:12.525391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:37.930 [2024-11-20 07:34:12.542430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.930 [2024-11-20 07:34:12.570746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:37.930 [2024-11-20 07:34:12.588685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.930 [2024-11-20 07:34:12.616671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:37.930 Running I/O for 1 seconds... 00:33:38.190 Running I/O for 1 seconds... 00:33:38.190 Running I/O for 1 seconds... 00:33:38.190 Running I/O for 1 seconds... 
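At this point all four bdevperf instances are running their one-second windows concurrently against the same cnode1 subsystem: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each as a separate SPDK application instance distinguished by its -i instance id (which also shows up as the spdk1..spdk4 file prefixes in the EAL parameter dumps above). An equivalent launch loop, sketched from the invocations recorded above, with $BDEVPERF standing in for build/examples/bdevperf:

    # Illustrative launch pattern for the four concurrent one-second jobs.
    for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
        set -- $spec    # $1 = core mask, $2 = instance id, $3 = workload
        "$BDEVPERF" -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$3" -t 1 -s 256 &    # -s 256: 256 MB of memory (the '-m 256' in the EAL dumps)
    done
    wait    # the script itself waits on the individual WRITE/READ/FLUSH/UNMAP PIDs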
00:33:39.134 12127.00 IOPS, 47.37 MiB/s 00:33:39.134 Latency(us) 00:33:39.134 [2024-11-20T06:34:13.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.134 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:39.134 Nvme1n1 : 1.01 12191.84 47.62 0.00 0.00 10463.51 4751.36 12888.75 00:33:39.134 [2024-11-20T06:34:13.901Z] =================================================================================================================== 00:33:39.134 [2024-11-20T06:34:13.901Z] Total : 12191.84 47.62 0.00 0.00 10463.51 4751.36 12888.75 00:33:39.134 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1543774 00:33:39.134 11640.00 IOPS, 45.47 MiB/s 00:33:39.134 Latency(us) 00:33:39.134 [2024-11-20T06:34:13.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.134 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:39.134 Nvme1n1 : 1.01 11691.06 45.67 0.00 0.00 10910.67 4751.36 14090.24 00:33:39.134 [2024-11-20T06:34:13.901Z] =================================================================================================================== 00:33:39.134 [2024-11-20T06:34:13.901Z] Total : 11691.06 45.67 0.00 0.00 10910.67 4751.36 14090.24 00:33:39.134 20013.00 IOPS, 78.18 MiB/s 00:33:39.134 Latency(us) 00:33:39.134 [2024-11-20T06:34:13.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.134 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:39.134 Nvme1n1 : 1.01 20100.59 78.52 0.00 0.00 6355.19 2048.00 10977.28 00:33:39.134 [2024-11-20T06:34:13.901Z] =================================================================================================================== 00:33:39.134 [2024-11-20T06:34:13.901Z] Total : 20100.59 78.52 0.00 0.00 6355.19 2048.00 10977.28 00:33:39.134 188152.00 IOPS, 734.97 MiB/s 00:33:39.134 Latency(us) 00:33:39.134 [2024-11-20T06:34:13.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.134 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:39.134 Nvme1n1 : 1.00 187780.68 733.52 0.00 0.00 677.81 300.37 1966.08 00:33:39.134 [2024-11-20T06:34:13.901Z] =================================================================================================================== 00:33:39.134 [2024-11-20T06:34:13.901Z] Total : 187780.68 733.52 0.00 0.00 677.81 300.37 1966.08 00:33:39.134 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1543777 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1543781 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:39.395 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.396 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:39.396 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.396 07:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.396 rmmod nvme_tcp 00:33:39.396 rmmod nvme_fabrics 00:33:39.396 rmmod nvme_keyring 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1543514 ']' 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1543514 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1543514 ']' 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1543514 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1543514 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1543514' 00:33:39.396 killing process with pid 1543514 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1543514 00:33:39.396 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1543514 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.657 07:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.571 00:33:41.571 real 0m13.606s 00:33:41.571 user 0m15.055s 00:33:41.571 sys 0m8.121s 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:41.571 ************************************ 00:33:41.571 END TEST nvmf_bdev_io_wait 00:33:41.571 ************************************ 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:41.571 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.833 ************************************ 00:33:41.833 START TEST nvmf_queue_depth 00:33:41.833 ************************************ 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:41.833 * Looking for test storage... 
00:33:41.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.833 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:41.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.833 --rc genhtml_branch_coverage=1 00:33:41.833 --rc genhtml_function_coverage=1 00:33:41.833 --rc genhtml_legend=1 00:33:41.834 --rc geninfo_all_blocks=1 00:33:41.834 --rc geninfo_unexecuted_blocks=1 00:33:41.834 00:33:41.834 ' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:41.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.834 --rc genhtml_branch_coverage=1 00:33:41.834 --rc genhtml_function_coverage=1 00:33:41.834 --rc genhtml_legend=1 00:33:41.834 --rc geninfo_all_blocks=1 00:33:41.834 --rc geninfo_unexecuted_blocks=1 00:33:41.834 00:33:41.834 ' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:41.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.834 --rc genhtml_branch_coverage=1 00:33:41.834 --rc genhtml_function_coverage=1 00:33:41.834 --rc genhtml_legend=1 00:33:41.834 --rc geninfo_all_blocks=1 00:33:41.834 --rc geninfo_unexecuted_blocks=1 00:33:41.834 00:33:41.834 ' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:41.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.834 --rc genhtml_branch_coverage=1 00:33:41.834 --rc genhtml_function_coverage=1 00:33:41.834 --rc genhtml_legend=1 00:33:41.834 --rc geninfo_all_blocks=1 00:33:41.834 --rc 
geninfo_unexecuted_blocks=1 00:33:41.834 00:33:41.834 ' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.834 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.096 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:42.096 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:42.096 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:42.096 07:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:50.252 07:34:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:50.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:50.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.252 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:33:50.253 Found net devices under 0000:31:00.0: cvl_0_0 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:50.253 Found net devices under 0000:31:00.1: cvl_0_1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:50.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:33:50.253 00:33:50.253 --- 10.0.0.2 ping statistics --- 00:33:50.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.253 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:33:50.253 00:33:50.253 --- 10.0.0.1 ping statistics --- 00:33:50.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.253 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1548712 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1548712 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1548712 ']' 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:50.253 07:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:50.253 [2024-11-20 07:34:24.868449] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:50.253 [2024-11-20 07:34:24.869427] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:33:50.253 [2024-11-20 07:34:24.869464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.253 [2024-11-20 07:34:24.974122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.253 [2024-11-20 07:34:25.009452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.253 [2024-11-20 07:34:25.009484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.253 [2024-11-20 07:34:25.009492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.253 [2024-11-20 07:34:25.009498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.253 [2024-11-20 07:34:25.009504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.253 [2024-11-20 07:34:25.010059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.514 [2024-11-20 07:34:25.065889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:50.514 [2024-11-20 07:34:25.066140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
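The target for the queue-depth test is now up inside the cvl_0_0_ns_spdk namespace, with both the app thread and the nvmf poll group thread switched to interrupt mode. The rpc_cmd calls that follow repeat the same five-step setup used for the bdev_io_wait test above. Spelled out against the default /var/tmp/spdk.sock socket, as a sketch with the standalone rpc.py client standing in for the harness's rpc_cmd wrapper:

    # Launch as traced above, then configure over the RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches from the initiator side with -q 1024 -w verify -t 10, as traced below, driving a queue depth well beyond the 128 used in the bdev_io_wait runs.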
00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 [2024-11-20 07:34:25.718788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 Malloc0 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 [2024-11-20 07:34:25.786952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1548938 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1548938 /var/tmp/bdevperf.sock 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1548938 ']' 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:51.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:51.086 07:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:51.086 [2024-11-20 07:34:25.842152] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:33:51.086 [2024-11-20 07:34:25.842204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548938 ] 00:33:51.347 [2024-11-20 07:34:25.921988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.347 [2024-11-20 07:34:25.960565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.918 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:51.918 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:51.918 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:51.918 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.918 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:52.179 NVMe0n1 00:33:52.179 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.179 07:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:52.179 Running I/O for 10 seconds... 00:33:54.066 8244.00 IOPS, 32.20 MiB/s [2024-11-20T06:34:30.214Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-20T06:34:31.156Z] 8876.67 IOPS, 34.67 MiB/s [2024-11-20T06:34:32.097Z] 9142.75 IOPS, 35.71 MiB/s [2024-11-20T06:34:33.038Z] 9650.40 IOPS, 37.70 MiB/s [2024-11-20T06:34:34.026Z] 10078.50 IOPS, 39.37 MiB/s [2024-11-20T06:34:35.026Z] 10392.14 IOPS, 40.59 MiB/s [2024-11-20T06:34:35.969Z] 10624.50 IOPS, 41.50 MiB/s [2024-11-20T06:34:36.912Z] 10804.00 IOPS, 42.20 MiB/s [2024-11-20T06:34:36.912Z] 10941.90 IOPS, 42.74 MiB/s 00:34:02.145 Latency(us) 00:34:02.145 [2024-11-20T06:34:36.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.145 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:02.145 Verification LBA range: start 0x0 length 0x4000 00:34:02.145 NVMe0n1 : 10.07 10965.46 42.83 0.00 0.00 93014.97 24576.00 79080.11 00:34:02.145 [2024-11-20T06:34:36.912Z] =================================================================================================================== 00:34:02.145 [2024-11-20T06:34:36.912Z] Total : 10965.46 42.83 0.00 0.00 93014.97 24576.00 79080.11 00:34:02.145 { 00:34:02.145 "results": [ 00:34:02.145 { 00:34:02.145 "job": "NVMe0n1", 00:34:02.145 "core_mask": "0x1", 00:34:02.145 "workload": "verify", 00:34:02.145 "status": "finished", 00:34:02.145 "verify_range": { 00:34:02.145 "start": 0, 00:34:02.145 "length": 16384 00:34:02.145 }, 00:34:02.145 "queue_depth": 1024, 00:34:02.145 "io_size": 4096, 00:34:02.145 "runtime": 10.066974, 00:34:02.145 "iops": 10965.459928673701, 00:34:02.145 "mibps": 42.833827846381645, 00:34:02.145 "io_failed": 0, 00:34:02.145 "io_timeout": 0, 00:34:02.145 "avg_latency_us": 93014.96713476887, 00:34:02.145 "min_latency_us": 24576.0, 00:34:02.145 "max_latency_us": 79080.10666666667 00:34:02.145 } 00:34:02.145 ], 
00:34:02.145 "core_count": 1 00:34:02.145 } 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1548938 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1548938 ']' 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1548938 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1548938 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1548938' 00:34:02.407 killing process with pid 1548938 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1548938 00:34:02.407 Received shutdown signal, test time was about 10.000000 seconds 00:34:02.407 00:34:02.407 Latency(us) 00:34:02.407 [2024-11-20T06:34:37.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.407 [2024-11-20T06:34:37.174Z] =================================================================================================================== 00:34:02.407 [2024-11-20T06:34:37.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:02.407 07:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1548938 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.407 rmmod nvme_tcp 00:34:02.407 rmmod nvme_fabrics 00:34:02.407 rmmod nvme_keyring 00:34:02.407 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:02.668 07:34:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1548712 ']' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1548712 ']' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1548712' 00:34:02.668 killing process with pid 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1548712 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.668 07:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.214 00:34:05.214 real 0m23.078s 00:34:05.214 user 0m24.786s 00:34:05.214 sys 0m7.802s 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:05.214 ************************************ 00:34:05.214 END TEST nvmf_queue_depth 00:34:05.214 ************************************ 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:05.214 ************************************ 00:34:05.214 START TEST nvmf_target_multipath 00:34:05.214 ************************************ 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:05.214 * Looking for test storage... 00:34:05.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:05.214 07:34:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:05.214 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:05.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.215 --rc genhtml_branch_coverage=1 00:34:05.215 --rc genhtml_function_coverage=1 00:34:05.215 --rc genhtml_legend=1 00:34:05.215 --rc geninfo_all_blocks=1 00:34:05.215 --rc geninfo_unexecuted_blocks=1 00:34:05.215 00:34:05.215 ' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:05.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.215 --rc genhtml_branch_coverage=1 00:34:05.215 --rc genhtml_function_coverage=1 00:34:05.215 --rc genhtml_legend=1 00:34:05.215 --rc geninfo_all_blocks=1 00:34:05.215 --rc geninfo_unexecuted_blocks=1 00:34:05.215 00:34:05.215 ' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:05.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.215 --rc genhtml_branch_coverage=1 00:34:05.215 --rc genhtml_function_coverage=1 00:34:05.215 --rc genhtml_legend=1 00:34:05.215 --rc geninfo_all_blocks=1 00:34:05.215 --rc 
geninfo_unexecuted_blocks=1 00:34:05.215 00:34:05.215 ' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:05.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.215 --rc genhtml_branch_coverage=1 00:34:05.215 --rc genhtml_function_coverage=1 00:34:05.215 --rc genhtml_legend=1 00:34:05.215 --rc geninfo_all_blocks=1 00:34:05.215 --rc geninfo_unexecuted_blocks=1 00:34:05.215 00:34:05.215 ' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.215 07:34:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.215 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.216 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.216 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.216 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.216 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.216 07:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.357 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.358 07:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:13.358 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:13.358 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:13.358 07:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:13.358 Found net devices under 0000:31:00.0: cvl_0_0 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:13.358 Found net devices under 0000:31:00.1: cvl_0_1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:13.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:13.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:34:13.358 00:34:13.358 --- 10.0.0.2 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:13.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:13.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:34:13.358 00:34:13.358 --- 10.0.0.1 ping statistics --- 00:34:13.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:13.358 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:13.358 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:13.359 only one NIC for nvmf test 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.359 rmmod nvme_tcp 00:34:13.359 rmmod nvme_fabrics 00:34:13.359 rmmod nvme_keyring 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:13.359 07:34:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.359 07:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:15.270 07:34:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.270 00:34:15.270 real 0m10.379s 00:34:15.270 user 0m2.395s 00:34:15.270 sys 0m5.922s 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:15.270 ************************************ 00:34:15.270 END TEST nvmf_target_multipath 00:34:15.270 ************************************ 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:15.270 ************************************ 00:34:15.270 START TEST nvmf_zcopy 00:34:15.270 ************************************ 00:34:15.270 07:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:15.531 * Looking for test storage... 
00:34:15.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:15.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.531 --rc genhtml_branch_coverage=1 00:34:15.531 --rc genhtml_function_coverage=1 00:34:15.531 --rc genhtml_legend=1 00:34:15.531 --rc geninfo_all_blocks=1 00:34:15.531 --rc geninfo_unexecuted_blocks=1 00:34:15.531 00:34:15.531 ' 00:34:15.531 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:15.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.531 --rc genhtml_branch_coverage=1 00:34:15.531 --rc genhtml_function_coverage=1 00:34:15.531 --rc genhtml_legend=1 00:34:15.531 --rc geninfo_all_blocks=1 00:34:15.531 --rc geninfo_unexecuted_blocks=1 00:34:15.531 00:34:15.531 ' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.532 --rc genhtml_branch_coverage=1 00:34:15.532 --rc genhtml_function_coverage=1 00:34:15.532 --rc genhtml_legend=1 00:34:15.532 --rc geninfo_all_blocks=1 00:34:15.532 --rc geninfo_unexecuted_blocks=1 00:34:15.532 00:34:15.532 ' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.532 --rc genhtml_branch_coverage=1 00:34:15.532 --rc genhtml_function_coverage=1 00:34:15.532 --rc genhtml_legend=1 00:34:15.532 --rc geninfo_all_blocks=1 00:34:15.532 --rc geninfo_unexecuted_blocks=1 00:34:15.532 00:34:15.532 ' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.532 07:34:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:15.532 07:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.671 07:34:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.671 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:23.672 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:23.672 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:23.672 Found net devices under 0000:31:00.0: cvl_0_0 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:23.672 Found net devices under 0000:31:00.1: cvl_0_1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.672 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.672 07:34:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:34:23.935 00:34:23.935 --- 10.0.0.2 ping statistics --- 00:34:23.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.935 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:23.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:34:23.935 00:34:23.935 --- 10.0.0.1 ping statistics --- 00:34:23.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.935 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1560333 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1560333 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1560333 ']' 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:23.935 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.936 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:23.936 07:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:23.936 [2024-11-20 07:34:58.599063] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:23.936 [2024-11-20 07:34:58.600229] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:34:23.936 [2024-11-20 07:34:58.600282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.196 [2024-11-20 07:34:58.707137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.197 [2024-11-20 07:34:58.756478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.197 [2024-11-20 07:34:58.756530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.197 [2024-11-20 07:34:58.756539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.197 [2024-11-20 07:34:58.756546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.197 [2024-11-20 07:34:58.756553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.197 [2024-11-20 07:34:58.757329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.197 [2024-11-20 07:34:58.834565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:24.197 [2024-11-20 07:34:58.834834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
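Everything from the address flushes above through the nvmf_tgt launch is one recurring harness pattern: isolate the target-side port in its own network namespace, open the NVMe/TCP port with a tagged iptables rule, prove reachability in both directions, then start the target inside the namespace. A condensed sketch of that sequence, assembled from the commands traced above (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this rig, not general defaults):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tag the rule so teardown can strip it with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
  # Started in the background; the harness then polls the RPC socket with waitforlisten.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The teardown traced at the top of this section is the exact inverse: restore the iptables state minus every SPDK_NVMF-tagged rule, remove the namespace (_remove_spdk_ns), and flush the leftover addresses.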
00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 [2024-11-20 07:34:59.458217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 [2024-11-20 07:34:59.486488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:24.769 07:34:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.769 malloc0 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.769 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.030 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.030 { 00:34:25.030 "params": { 00:34:25.030 "name": "Nvme$subsystem", 00:34:25.030 "trtype": "$TEST_TRANSPORT", 00:34:25.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.030 "adrfam": "ipv4", 00:34:25.030 "trsvcid": "$NVMF_PORT", 00:34:25.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.031 "hdgst": ${hdgst:-false}, 00:34:25.031 "ddgst": ${ddgst:-false} 00:34:25.031 }, 00:34:25.031 "method": "bdev_nvme_attach_controller" 00:34:25.031 } 00:34:25.031 EOF 00:34:25.031 )") 00:34:25.031 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:25.031 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:25.031 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:25.031 07:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.031 "params": { 00:34:25.031 "name": "Nvme1", 00:34:25.031 "trtype": "tcp", 00:34:25.031 "traddr": "10.0.0.2", 00:34:25.031 "adrfam": "ipv4", 00:34:25.031 "trsvcid": "4420", 00:34:25.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:25.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:25.031 "hdgst": false, 00:34:25.031 "ddgst": false 00:34:25.031 }, 00:34:25.031 "method": "bdev_nvme_attach_controller" 00:34:25.031 }' 00:34:25.031 [2024-11-20 07:34:59.588712] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
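The rpc_cmd sequence traced above is the entire target-side setup for this test: a zero-copy-capable TCP transport, one subsystem backed by a 32 MiB malloc bdev as namespace 1, and a listener on 10.0.0.2:4420. The initiator half is just bdevperf, fed its configuration through an anonymous file descriptor rather than a file, which is why the trace shows --json /dev/fd/62. A sketch of both halves (scripts/rpc.py stands in here for the harness's rpc_cmd wrapper, and the outer {"subsystems": ...} envelope around the traced JSON fragment is an assumption about what gen_nvmf_target_json emits in full):

  # Target side, mirroring the rpc_cmd calls in the trace.
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: process substitution hands bdevperf a /dev/fd/NN path.
  # The envelope below is assumed; the "params" block is copied from the trace.
  json='{"subsystems": [{"subsystem": "bdev", "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                     "adrfam": "ipv4", "trsvcid": "4420",
                     "subnqn": "nqn.2016-06.io.spdk:cnode1",
                     "hostnqn": "nqn.2016-06.io.spdk:host1",
                     "hdgst": false, "ddgst": false}}]}]}'
  build/examples/bdevperf --json <(printf '%s\n' "$json") -t 10 -q 128 -w verify -o 8192

Matching the flags in the trace, this runs the verify workload for ten seconds at queue depth 128 with 8 KiB I/O, which is exactly the shape reported in the "Core Mask 0x1, workload: verify, depth: 128, IO size: 8192" results below.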
00:34:25.031 [2024-11-20 07:34:59.588782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560651 ] 00:34:25.031 [2024-11-20 07:34:59.675087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.031 [2024-11-20 07:34:59.718316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.291 Running I/O for 10 seconds... 00:34:27.614 6589.00 IOPS, 51.48 MiB/s [2024-11-20T06:35:03.325Z] 6643.50 IOPS, 51.90 MiB/s [2024-11-20T06:35:04.268Z] 6645.67 IOPS, 51.92 MiB/s [2024-11-20T06:35:05.212Z] 6659.00 IOPS, 52.02 MiB/s [2024-11-20T06:35:06.153Z] 7201.20 IOPS, 56.26 MiB/s [2024-11-20T06:35:07.095Z] 7610.83 IOPS, 59.46 MiB/s [2024-11-20T06:35:08.483Z] 7903.71 IOPS, 61.75 MiB/s [2024-11-20T06:35:09.055Z] 8126.00 IOPS, 63.48 MiB/s [2024-11-20T06:35:10.440Z] 8295.78 IOPS, 64.81 MiB/s [2024-11-20T06:35:10.440Z] 8432.40 IOPS, 65.88 MiB/s 00:34:35.673 Latency(us) 00:34:35.673 [2024-11-20T06:35:10.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:35.673 Verification LBA range: start 0x0 length 0x1000 00:34:35.673 Nvme1n1 : 10.05 8401.13 65.63 0.00 0.00 15128.02 2280.11 43253.76 00:34:35.673 [2024-11-20T06:35:10.440Z] =================================================================================================================== 00:34:35.673 [2024-11-20T06:35:10.440Z] Total : 8401.13 65.63 0.00 0.00 15128.02 2280.11 43253.76 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1562599 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.673 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.673 { 00:34:35.673 "params": { 00:34:35.673 "name": "Nvme$subsystem", 00:34:35.673 "trtype": "$TEST_TRANSPORT", 00:34:35.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.673 "adrfam": "ipv4", 00:34:35.673 "trsvcid": "$NVMF_PORT", 00:34:35.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.673 "hdgst": ${hdgst:-false}, 00:34:35.673 "ddgst": ${ddgst:-false} 00:34:35.673 }, 00:34:35.673 "method": "bdev_nvme_attach_controller" 00:34:35.673 } 00:34:35.673 EOF 00:34:35.673 )") 00:34:35.673 [2024-11-20 07:35:10.221728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:34:35.673 [2024-11-20 07:35:10.221756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:35.674 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:35.674 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:35.674 07:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:35.674 "params": { 00:34:35.674 "name": "Nvme1", 00:34:35.674 "trtype": "tcp", 00:34:35.674 "traddr": "10.0.0.2", 00:34:35.674 "adrfam": "ipv4", 00:34:35.674 "trsvcid": "4420", 00:34:35.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:35.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:35.674 "hdgst": false, 00:34:35.674 "ddgst": false 00:34:35.674 }, 00:34:35.674 "method": "bdev_nvme_attach_controller" 00:34:35.674 }' 00:34:35.674 [2024-11-20 07:35:10.233701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.233710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.245699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.245707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.257699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.257706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.267701] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
00:34:35.674 [2024-11-20 07:35:10.267751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562599 ] 00:34:35.674 [2024-11-20 07:35:10.269699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.269708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.281698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.281706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.293699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.293707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.305699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.305706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.317699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.317706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.329699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.329707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.341699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.341706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.344495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.674 [2024-11-20 07:35:10.353699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.353713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.365698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.365707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.377699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.377708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.379829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.674 [2024-11-20 07:35:10.389700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.389708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.401705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.401717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.413702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:35.674 [2024-11-20 07:35:10.413713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.425699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.425709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.674 [2024-11-20 07:35:10.437700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.674 [2024-11-20 07:35:10.437707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.449705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.449718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.461704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.461715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.473701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.473710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.485699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.485708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.497699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.497706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.509699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.509706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.521700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.521709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.533699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.533709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.545699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.545706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.557698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.557705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.569698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.569705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 07:35:10.581699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.935 [2024-11-20 07:35:10.581708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.935 [2024-11-20 
07:35:10.593699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.593706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.605699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.605705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.617700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.617709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.629698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.629706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.641698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.641706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.653699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.653707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.665731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.665741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 [2024-11-20 07:35:10.677704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.677716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:35.935 Running I/O for 5 seconds... 
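The repeating pair of traces above and below ("Requested NSID 1 already in use" followed by "Unable to add namespace") is deliberate churn rather than a failure: while bdevperf drives the 50/50 random read/write run, the test keeps re-issuing nvmf_subsystem_add_ns against a subsystem whose NSID 1 is already occupied, and each rejected attempt exercises the subsystem pause/resume path (hence nvmf_rpc_ns_paused in the trace) under interrupt mode. A minimal sketch of a loop that produces this trace pattern (the loop shape is illustrative only; the actual control flow lives in target/zcopy.sh):

  # Hammer the occupied namespace slot for as long as the perf job runs.
  # Every call is expected to fail with "Requested NSID 1 already in use";
  # the point is the pause/resume cycle each rejected add forces.
  while kill -0 "$perfpid" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done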
00:34:35.935 [2024-11-20 07:35:10.692744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:35.935 [2024-11-20 07:35:10.692761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every subsequent add-namespace attempt, timestamps 07:35:10.705381 through 07:35:11.680799 ...]
00:34:36.982 19036.00 IOPS, 148.72 MiB/s [2024-11-20T06:35:11.749Z]
[... error pair repeats, timestamps 07:35:11.693840 through 07:35:12.690012 ...]
00:34:38.027 19021.50 IOPS, 148.61 MiB/s [2024-11-20T06:35:12.794Z]
[... error pair repeats, timestamps 07:35:12.704707 through 07:35:13.686159 ...]
00:34:39.073 19040.67 IOPS, 148.76 MiB/s [2024-11-20T06:35:13.840Z]
[... error pair repeats, timestamps 07:35:13.700691 through 07:35:14.692913 ...]
00:34:40.122 19045.50 IOPS, 148.79 MiB/s [2024-11-20T06:35:14.888Z]
[... error pair repeats, timestamps 07:35:14.706256 through 07:35:14.829025, where this excerpt ends mid-entry ...]
00:34:40.122 [2024-11-20 07:35:14.829040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.122 [2024-11-20 07:35:14.842145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.122 [2024-11-20 07:35:14.842159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.122 [2024-11-20 07:35:14.856546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.122 [2024-11-20 07:35:14.856561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.122 [2024-11-20 07:35:14.869447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.122 [2024-11-20 07:35:14.869462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.122 [2024-11-20 07:35:14.882708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.122 [2024-11-20 07:35:14.882723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.896614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.896629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.909724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.909738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.922549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.922563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.937228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.937242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.950263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.950277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.965184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.965199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.978177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.978192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:14.993148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:14.993163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.006173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.006187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.020977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.020992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.033681] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.033696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.046502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.046516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.061357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.061372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.074591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.074605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.088986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.089000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.102032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.102046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.116601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.116616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.129515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.129530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.384 [2024-11-20 07:35:15.142304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.384 [2024-11-20 07:35:15.142318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.156407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.156422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.169217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.169231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.182562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.182577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.196485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.196501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.209933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.209949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.222665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.222679] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.237164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.237179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.249793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.249808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.262620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.262634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.276713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.276728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.289769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.289783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.302783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.302798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.317371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.317386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.330907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.330922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.345114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.345128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.358175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.358190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.373110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.373125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.386475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.386490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.646 [2024-11-20 07:35:15.400983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.646 [2024-11-20 07:35:15.400998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.413946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.413961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.426783] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.426798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.440832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.440847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.453688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.453703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.466503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.466517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.480705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.480720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.494050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.494064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.508684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.508699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.521410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.521425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.534786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.534800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.549005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.549020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.562146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.562160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.576673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.576687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.589486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.589500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.602026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.602040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.617174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.617189] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.630070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.630084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.644827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.644841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.657636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.657652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:40.908 [2024-11-20 07:35:15.670931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:40.908 [2024-11-20 07:35:15.670946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.684897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.684912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 19056.80 IOPS, 148.88 MiB/s [2024-11-20T06:35:15.936Z] [2024-11-20 07:35:15.697822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.697837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 00:34:41.169 Latency(us) 00:34:41.169 [2024-11-20T06:35:15.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.169 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:41.169 Nvme1n1 : 5.01 19057.94 148.89 0.00 0.00 6709.80 2553.17 12943.36 00:34:41.169 [2024-11-20T06:35:15.936Z] =================================================================================================================== 00:34:41.169 [2024-11-20T06:35:15.936Z] Total : 19057.94 148.89 0.00 0.00 6709.80 2553.17 12943.36 00:34:41.169 [2024-11-20 07:35:15.705704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.705719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.717702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.717714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.729706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.729718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.741704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.741716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.753702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.753713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.765699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 
07:35:15.765707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.777699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.777708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.789698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.789706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.801702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.801714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 [2024-11-20 07:35:15.813699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:41.169 [2024-11-20 07:35:15.813707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:41.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1562599) - No such process 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1562599 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 delay0 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.169 07:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:41.429 [2024-11-20 07:35:15.960478] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:49.569 Initializing NVMe Controllers 00:34:49.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:34:49.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:49.569 Initialization complete. Launching workers.
00:34:49.569 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 16833
00:34:49.569 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16986, failed to submit 112
00:34:49.569 success 16902, unsuccessful 84, failed 0
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:49.569 rmmod nvme_tcp
00:34:49.569 rmmod nvme_fabrics
00:34:49.569 rmmod nvme_keyring
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1560333 ']'
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1560333
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1560333 ']'
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1560333
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:34:49.569 07:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1560333
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1560333'
00:34:49.569 killing process with pid 1560333
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1560333
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1560333
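
For reference, the abort step traced above condenses to the rpc.py calls sketched below (an illustrative sketch, not the verbatim zcopy.sh: it assumes a running SPDK target, the repo's scripts/rpc.py on its default socket, and that rpc_cmd in this harness resolves to that script). As a sanity check on the numbers, 19057.94 IOPS at the 8192-byte I/O size is 19057.94 * 8192 / 2^20, about 148.9 MiB/s, matching the table above.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Free NSID 1, then re-expose it backed by a delay bdev (-r/-t/-w/-n are
    # average and p99 read/write latency in microseconds, so ~1 s each way);
    # queued I/O then stays in flight long enough for the abort tool to catch.
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1
    # One core, 5 s, queue depth 64, 50/50 random read/write over the TCP listener.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
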
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:49.569 07:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:50.512 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:50.512
00:34:50.512 real 0m35.272s
00:34:50.512 user 0m44.518s
00:34:50.512 sys 0m12.866s
00:34:50.512 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:34:50.512 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:50.512 ************************************
00:34:50.512 END TEST nvmf_zcopy
00:34:50.512 ************************************
00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:50.773 ************************************
00:34:50.773 START TEST nvmf_nmic
00:34:50.773 ************************************
00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:50.773 * Looking for test storage...
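
The xtrace that follows is scripts/common.sh comparing the installed lcov version against 2.x so it can pick the right coverage flags; 1.15 sorts below 2, so the legacy --rc lcov_*_coverage spellings are kept. The comparison condenses to roughly this sketch (an illustration of the field-by-field compare, not the verbatim helper):

    version_lt() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"    # split dotted/dashed version into fields
        read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; ++v )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        done
        return 1                  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"
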
00:34:50.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:50.773 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:50.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.774 --rc genhtml_branch_coverage=1 00:34:50.774 --rc genhtml_function_coverage=1 00:34:50.774 --rc genhtml_legend=1 00:34:50.774 --rc geninfo_all_blocks=1 00:34:50.774 --rc geninfo_unexecuted_blocks=1 00:34:50.774 00:34:50.774 ' 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:50.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.774 --rc genhtml_branch_coverage=1 00:34:50.774 --rc genhtml_function_coverage=1 00:34:50.774 --rc genhtml_legend=1 00:34:50.774 --rc geninfo_all_blocks=1 00:34:50.774 --rc geninfo_unexecuted_blocks=1 00:34:50.774 00:34:50.774 ' 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:50.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.774 --rc genhtml_branch_coverage=1 00:34:50.774 --rc genhtml_function_coverage=1 00:34:50.774 --rc genhtml_legend=1 00:34:50.774 --rc geninfo_all_blocks=1 00:34:50.774 --rc geninfo_unexecuted_blocks=1 00:34:50.774 00:34:50.774 ' 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:50.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:50.774 --rc genhtml_branch_coverage=1 00:34:50.774 --rc genhtml_function_coverage=1 00:34:50.774 --rc genhtml_legend=1 00:34:50.774 --rc geninfo_all_blocks=1 00:34:50.774 --rc geninfo_unexecuted_blocks=1 00:34:50.774 00:34:50.774 ' 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.774 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.035 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:51.035 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.036 07:35:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.036 07:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.183 07:35:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.183 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:59.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.184 07:35:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:59.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:59.184 Found net devices under 0000:31:00.0: cvl_0_0 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.184 
07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:59.184 Found net devices under 0000:31:00.1: cvl_0_1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
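For reference, the interface plumbing nvmf_tcp_init performs here reduces to the short sequence below; a minimal sketch, with the cvl_0_0/cvl_0_1 port names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses all taken from this run:

  # target-side port moves into its own network namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP on port 4420, tagged SPDK_NVMF so cleanup can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # reachability check in both directions before the target starts
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1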
00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:34:59.184 00:34:59.184 --- 10.0.0.2 ping statistics --- 00:34:59.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.184 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:34:59.184 00:34:59.184 --- 10.0.0.1 ping statistics --- 00:34:59.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.184 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:59.184 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1569630 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1569630 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1569630 ']' 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:59.185 07:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:59.185 [2024-11-20 07:35:33.918479] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:59.185 [2024-11-20 07:35:33.919655] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:34:59.185 [2024-11-20 07:35:33.919704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.445 [2024-11-20 07:35:34.014002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.445 [2024-11-20 07:35:34.056166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.445 [2024-11-20 07:35:34.056205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.445 [2024-11-20 07:35:34.056214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.445 [2024-11-20 07:35:34.056221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.445 [2024-11-20 07:35:34.056227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.445 [2024-11-20 07:35:34.058082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.445 [2024-11-20 07:35:34.058208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:59.445 [2024-11-20 07:35:34.058366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.445 [2024-11-20 07:35:34.058366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.445 [2024-11-20 07:35:34.115300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.445 [2024-11-20 07:35:34.115586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:59.445 [2024-11-20 07:35:34.116596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
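The launch that produced the startup notices above is in the trace as well; condensed, it is the sketch below (the -i 0 shared-memory id, 0xFFFF tracepoint mask, and 0xF core mask are this run's values, and the backgrounding/pid capture is paraphrased from nvmfappstart):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # waitforlisten then blocks until the app answers JSON-RPC on /var/tmp/spdk.sock

With --interrupt-mode the four reactors wait on file-descriptor events rather than busy-polling, which is what the "Set spdk_thread (...) to intr mode" notices record.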
00:34:59.445 [2024-11-20 07:35:34.116886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:59.445 [2024-11-20 07:35:34.117044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:00.017 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:00.017 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:35:00.017 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:00.017 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:00.017 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 [2024-11-20 07:35:34.798879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 Malloc0 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
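rpc_cmd in the trace is a thin wrapper around scripts/rpc.py against the target's RPC socket, so the whole provisioning step for this test is five calls; sketched plainly, with the transport flags, the Malloc0 geometry (64 MB of 512-byte blocks), the NQN, and the serial exactly as traced:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420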
00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.278 [2024-11-20 07:35:34.887125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.278 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:00.279 test case1: single bdev can't be used in multiple subsystems 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.279 [2024-11-20 07:35:34.922747] bdev.c:8318:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:00.279 [2024-11-20 07:35:34.922769] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:00.279 [2024-11-20 07:35:34.922777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.279 request: 00:35:00.279 { 00:35:00.279 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:00.279 "namespace": { 00:35:00.279 "bdev_name": "Malloc0", 00:35:00.279 "no_auto_visible": false 00:35:00.279 }, 00:35:00.279 "method": "nvmf_subsystem_add_ns", 00:35:00.279 "req_id": 1 00:35:00.279 } 00:35:00.279 Got JSON-RPC error response 00:35:00.279 response: 00:35:00.279 { 00:35:00.279 "code": -32602, 00:35:00.279 "message": "Invalid parameters" 00:35:00.279 } 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:00.279 07:35:34 
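That -32602 response is the point of test case1: Malloc0 is already claimed exclusive_write by cnode1, so the second claim has to fail, and the script only treats success as an error. Stand-alone, the check is roughly this sketch (it leans on rpc.py exiting non-zero on a JSON-RPC error, which is what the nmic_status bookkeeping mirrors):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'unexpected: one bdev claimed by two subsystems' >&2
      exit 1
  fi
  # expected path: error -32602, bdev Malloc0 already claimed by module NVMe-oF Target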
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:00.279 Adding namespace failed - expected result. 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:00.279 test case2: host connect to nvmf target in multiple paths 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:00.279 [2024-11-20 07:35:34.934870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.279 07:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:00.539 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:01.110 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:01.110 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:35:01.110 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:01.110 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:01.110 07:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:35:03.131 07:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:03.131 [global] 00:35:03.131 thread=1 00:35:03.131 invalidate=1 
00:35:03.131 rw=write 00:35:03.131 time_based=1 00:35:03.131 runtime=1 00:35:03.131 ioengine=libaio 00:35:03.131 direct=1 00:35:03.131 bs=4096 00:35:03.131 iodepth=1 00:35:03.131 norandommap=0 00:35:03.131 numjobs=1 00:35:03.131 00:35:03.131 verify_dump=1 00:35:03.131 verify_backlog=512 00:35:03.131 verify_state_save=0 00:35:03.131 do_verify=1 00:35:03.131 verify=crc32c-intel 00:35:03.131 [job0] 00:35:03.131 filename=/dev/nvme0n1 00:35:03.131 Could not set queue depth (nvme0n1) 00:35:03.392 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:03.392 fio-3.35 00:35:03.392 Starting 1 thread 00:35:04.780 00:35:04.780 job0: (groupid=0, jobs=1): err= 0: pid=1570581: Wed Nov 20 07:35:39 2024 00:35:04.780 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:04.780 slat (nsec): min=25954, max=59885, avg=26504.39, stdev=1865.89 00:35:04.780 clat (usec): min=779, max=1129, avg=971.72, stdev=51.19 00:35:04.780 lat (usec): min=807, max=1155, avg=998.22, stdev=51.26 00:35:04.780 clat percentiles (usec): 00:35:04.780 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 947], 00:35:04.780 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:35:04.780 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:35:04.780 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:35:04.780 | 99.99th=[ 1123] 00:35:04.780 write: IOPS=772, BW=3089KiB/s (3163kB/s)(3092KiB/1001msec); 0 zone resets 00:35:04.780 slat (nsec): min=9240, max=68936, avg=28938.96, stdev=10409.14 00:35:04.780 clat (usec): min=266, max=851, avg=592.03, stdev=94.29 00:35:04.780 lat (usec): min=279, max=885, avg=620.97, stdev=100.15 00:35:04.780 clat percentiles (usec): 00:35:04.780 | 1.00th=[ 367], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 510], 00:35:04.780 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:35:04.780 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 734], 00:35:04.780 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 848], 99.95th=[ 848], 00:35:04.780 | 99.99th=[ 848] 00:35:04.780 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:04.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:04.780 lat (usec) : 500=10.35%, 750=48.09%, 1000=31.83% 00:35:04.780 lat (msec) : 2=9.73% 00:35:04.780 cpu : usr=3.30%, sys=4.10%, ctx=1285, majf=0, minf=1 00:35:04.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.780 issued rwts: total=512,773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:04.780 00:35:04.780 Run status group 0 (all jobs): 00:35:04.780 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:35:04.780 WRITE: bw=3089KiB/s (3163kB/s), 3089KiB/s-3089KiB/s (3163kB/s-3163kB/s), io=3092KiB (3166kB), run=1001-1001msec 00:35:04.780 00:35:04.780 Disk stats (read/write): 00:35:04.780 nvme0n1: ios=562/612, merge=0/0, ticks=579/295, in_queue=874, util=93.99% 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:04.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:04.780 07:35:39 
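Reassembled from the interleaved trace above, the job file that fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v) handed to fio is:

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1

do_verify=1 with verify=crc32c-intel reads written blocks back and checksums them, which is why this nominally write-only job still reports a READ row in the stats above.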
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.780 rmmod nvme_tcp 00:35:04.780 rmmod nvme_fabrics 00:35:04.780 rmmod nvme_keyring 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1569630 ']' 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1569630 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1569630 ']' 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1569630 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:04.780 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1569630 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 1569630' 00:35:05.041 killing process with pid 1569630 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1569630 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1569630 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.041 07:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.589 00:35:07.589 real 0m16.479s 00:35:07.589 user 0m37.690s 00:35:07.589 sys 0m8.198s 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:07.589 ************************************ 00:35:07.589 END TEST nvmf_nmic 00:35:07.589 ************************************ 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.589 ************************************ 00:35:07.589 START TEST nvmf_fio_target 00:35:07.589 ************************************ 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:07.589 * Looking for test storage... 
00:35:07.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.589 07:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:07.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.589 --rc genhtml_branch_coverage=1 00:35:07.589 --rc genhtml_function_coverage=1 00:35:07.589 --rc genhtml_legend=1 00:35:07.589 --rc geninfo_all_blocks=1 00:35:07.589 --rc geninfo_unexecuted_blocks=1 00:35:07.589 00:35:07.589 ' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.589 --rc genhtml_branch_coverage=1 00:35:07.589 --rc genhtml_function_coverage=1 00:35:07.589 --rc genhtml_legend=1 00:35:07.589 --rc geninfo_all_blocks=1 00:35:07.589 --rc geninfo_unexecuted_blocks=1 00:35:07.589 00:35:07.589 ' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:07.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.589 --rc genhtml_branch_coverage=1 00:35:07.589 --rc genhtml_function_coverage=1 00:35:07.589 --rc genhtml_legend=1 00:35:07.589 --rc geninfo_all_blocks=1 00:35:07.589 --rc geninfo_unexecuted_blocks=1 00:35:07.589 00:35:07.589 ' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.589 --rc genhtml_branch_coverage=1 00:35:07.589 --rc genhtml_function_coverage=1 00:35:07.589 --rc genhtml_legend=1 00:35:07.589 --rc geninfo_all_blocks=1 00:35:07.589 --rc geninfo_unexecuted_blocks=1 00:35:07.589 
00:35:07.589 ' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.589 07:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:15.734 07:35:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:15.734 07:35:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:15.734 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.734 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:15.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:15.735 Found net 
devices under 0000:31:00.0: cvl_0_0 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:15.735 Found net devices under 0000:31:00.1: cvl_0_1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:15.735 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:15.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:35:15.997 00:35:15.997 --- 10.0.0.2 ping statistics --- 00:35:15.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.997 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:35:15.997
00:35:15.997 --- 10.0.0.1 ping statistics ---
00:35:15.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:15.997 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1575599
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1575599
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1575599 ']'
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:15.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
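[Editor's note] The namespace plumbing that nvmf_tcp_init traced above reduces to a short iproute2 sequence. A minimal sketch follows, with interface and namespace names exactly as logged; it paraphrases what the log shows rather than reproducing the authoritative logic in nvmf/common.sh. The two cvl_0_* ports of the E810 NIC are evidently wired back-to-back, so moving one port into a private namespace yields a real-hardware initiator/target pair on a single host:

  ip -4 addr flush cvl_0_0                            # start from clean ports
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # sanity check: default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction

The two successful single-packet pings logged above confirm exactly this reachability in both directions before the target application starts.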
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable
00:35:15.997 07:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:35:15.997 [2024-11-20 07:35:50.698886] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:15.997 [2024-11-20 07:35:50.700046] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization...
00:35:15.997 [2024-11-20 07:35:50.700101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:16.260 [2024-11-20 07:35:50.791601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:16.260 [2024-11-20 07:35:50.834004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:16.260 [2024-11-20 07:35:50.834041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:16.260 [2024-11-20 07:35:50.834050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:16.260 [2024-11-20 07:35:50.834057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:16.260 [2024-11-20 07:35:50.834063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:16.260 [2024-11-20 07:35:50.835557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:16.260 [2024-11-20 07:35:50.835674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:16.260 [2024-11-20 07:35:50.835830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:16.260 [2024-11-20 07:35:50.835831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:16.260 [2024-11-20 07:35:50.892888] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:16.260 [2024-11-20 07:35:50.893045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:16.260 [2024-11-20 07:35:50.894066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:35:16.260 [2024-11-20 07:35:50.894933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:16.260 [2024-11-20 07:35:50.895005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
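[Editor's note] The NOTICE lines above show nvmf_tgt coming up with --interrupt-mode, so the app thread and all four poll-group threads park in event-driven mode instead of busy-polling. The `waitforlisten 1575599` call traced earlier then blocks until the target answers on its RPC socket; a hedged sketch of that wait loop, consistent with the `(( i == 0 ))` / `return 0` trace that follows below but not the verbatim common/autotest_common.sh implementation:

  # Poll the RPC socket until the freshly started target (pid 1575599) responds.
  i=0
  while (( i++ <= 100 )); do                # max_retries=100, per the trace above
      kill -0 1575599 2>/dev/null || break  # stop waiting if the target process died
      # rpc_get_methods is a cheap RPC that succeeds once the app is listening
      scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.5                             # assumed back-off interval for this sketch
  done

In the run logged here the loop exits on its first check (`(( i == 0 ))` then `return 0` in the next entries), i.e. the target was already listening about a second after launch.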
00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.833 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:17.094 [2024-11-20 07:35:51.680325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.094 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:17.354 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:17.354 07:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:17.354 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:17.354 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:17.616 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:17.616 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:17.877 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:17.877 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:18.138 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:18.138 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:18.138 07:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:18.398 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:18.398 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:18.659 07:35:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:18.659 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:18.659 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:18.921 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:18.921 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.182 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:19.182 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:19.444 07:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:19.444 [2024-11-20 07:35:54.108474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.444 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:19.705 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:19.965 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:35:20.226 07:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:35:22.137 07:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:22.397 [global] 00:35:22.397 thread=1 00:35:22.397 invalidate=1 00:35:22.397 rw=write 00:35:22.397 time_based=1 00:35:22.397 runtime=1 00:35:22.397 ioengine=libaio 00:35:22.397 direct=1 00:35:22.397 bs=4096 00:35:22.397 iodepth=1 00:35:22.397 norandommap=0 00:35:22.397 numjobs=1 00:35:22.397 00:35:22.398 verify_dump=1 00:35:22.398 verify_backlog=512 00:35:22.398 verify_state_save=0 00:35:22.398 do_verify=1 00:35:22.398 verify=crc32c-intel 00:35:22.398 [job0] 00:35:22.398 filename=/dev/nvme0n1 00:35:22.398 [job1] 00:35:22.398 filename=/dev/nvme0n2 00:35:22.398 [job2] 00:35:22.398 filename=/dev/nvme0n3 00:35:22.398 [job3] 00:35:22.398 filename=/dev/nvme0n4 00:35:22.398 Could not set queue depth (nvme0n1) 00:35:22.398 Could not set queue depth (nvme0n2) 00:35:22.398 Could not set queue depth (nvme0n3) 00:35:22.398 Could not set queue depth (nvme0n4) 00:35:22.658 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.658 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.658 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.658 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.658 fio-3.35 00:35:22.658 Starting 4 threads 00:35:24.041 00:35:24.041 job0: (groupid=0, jobs=1): err= 0: pid=1576963: Wed Nov 20 07:35:58 2024 00:35:24.041 read: IOPS=553, BW=2214KiB/s (2267kB/s)(2216KiB/1001msec) 00:35:24.041 slat (nsec): min=6826, max=45817, avg=22619.12, stdev=8472.42 00:35:24.041 clat (usec): min=564, max=1499, avg=801.71, stdev=116.05 00:35:24.041 lat (usec): min=572, max=1525, avg=824.33, stdev=118.08 00:35:24.041 clat percentiles (usec): 00:35:24.041 | 1.00th=[ 627], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 734], 00:35:24.041 | 30.00th=[ 766], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 807], 00:35:24.041 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 930], 00:35:24.041 | 99.00th=[ 1336], 99.50th=[ 1401], 99.90th=[ 1500], 99.95th=[ 1500], 00:35:24.041 | 99.99th=[ 1500] 00:35:24.041 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:24.041 slat (nsec): min=9652, max=69277, avg=26397.15, stdev=10848.21 00:35:24.041 clat (usec): min=208, max=922, avg=494.42, stdev=149.72 00:35:24.041 lat (usec): min=218, max=955, avg=520.82, stdev=154.52 00:35:24.041 clat percentiles (usec): 00:35:24.041 | 1.00th=[ 258], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 359], 00:35:24.041 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 461], 60.00th=[ 482], 00:35:24.041 | 70.00th=[ 537], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 775], 00:35:24.041 | 99.00th=[ 848], 
99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 922], 00:35:24.041 | 99.99th=[ 922] 00:35:24.041 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:24.041 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:24.041 lat (usec) : 250=0.44%, 500=42.08%, 750=26.49%, 1000=29.47% 00:35:24.041 lat (msec) : 2=1.52% 00:35:24.041 cpu : usr=2.10%, sys=4.00%, ctx=1579, majf=0, minf=1 00:35:24.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.041 issued rwts: total=554,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.041 job1: (groupid=0, jobs=1): err= 0: pid=1576973: Wed Nov 20 07:35:58 2024 00:35:24.041 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:24.041 slat (nsec): min=24317, max=43988, avg=25331.92, stdev=2656.77 00:35:24.041 clat (usec): min=729, max=41043, avg=1473.97, stdev=3511.43 00:35:24.041 lat (usec): min=755, max=41069, avg=1499.31, stdev=3511.49 00:35:24.041 clat percentiles (usec): 00:35:24.041 | 1.00th=[ 881], 5.00th=[ 979], 10.00th=[ 1037], 20.00th=[ 1090], 00:35:24.041 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:35:24.041 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:35:24.041 | 99.00th=[ 1434], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:24.041 | 99.99th=[41157] 00:35:24.041 write: IOPS=586, BW=2346KiB/s (2402kB/s)(2348KiB/1001msec); 0 zone resets 00:35:24.041 slat (nsec): min=9549, max=69449, avg=26066.19, stdev=11379.00 00:35:24.041 clat (usec): min=141, max=924, avg=355.83, stdev=127.98 00:35:24.042 lat (usec): min=151, max=955, avg=381.90, stdev=130.17 00:35:24.042 clat percentiles (usec): 00:35:24.042 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 231], 20.00th=[ 277], 00:35:24.042 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:35:24.042 | 70.00th=[ 363], 80.00th=[ 396], 90.00th=[ 578], 95.00th=[ 668], 00:35:24.042 | 99.00th=[ 775], 99.50th=[ 783], 99.90th=[ 922], 99.95th=[ 922], 00:35:24.042 | 99.99th=[ 922] 00:35:24.042 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:24.042 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:24.042 lat (usec) : 250=6.82%, 500=40.04%, 750=5.73%, 1000=4.00% 00:35:24.042 lat (msec) : 2=43.04%, 50=0.36% 00:35:24.042 cpu : usr=1.40%, sys=3.10%, ctx=1100, majf=0, minf=1 00:35:24.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 issued rwts: total=512,587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.042 job2: (groupid=0, jobs=1): err= 0: pid=1576990: Wed Nov 20 07:35:58 2024 00:35:24.042 read: IOPS=15, BW=63.7KiB/s (65.3kB/s)(64.0KiB/1004msec) 00:35:24.042 slat (nsec): min=28308, max=29376, avg=28708.75, stdev=259.59 00:35:24.042 clat (usec): min=41017, max=42039, avg=41900.88, stdev=239.48 00:35:24.042 lat (usec): min=41046, max=42068, avg=41929.59, stdev=239.41 00:35:24.042 clat percentiles (usec): 00:35:24.042 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 
00:35:24.042 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:24.042 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:24.042 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:24.042 | 99.99th=[42206] 00:35:24.042 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:35:24.042 slat (nsec): min=9921, max=56644, avg=32709.54, stdev=10850.63 00:35:24.042 clat (usec): min=248, max=1043, avg=610.33, stdev=135.46 00:35:24.042 lat (usec): min=262, max=1081, avg=643.04, stdev=139.87 00:35:24.042 clat percentiles (usec): 00:35:24.042 | 1.00th=[ 318], 5.00th=[ 367], 10.00th=[ 400], 20.00th=[ 494], 00:35:24.042 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:35:24.042 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 807], 00:35:24.042 | 99.00th=[ 881], 99.50th=[ 963], 99.90th=[ 1045], 99.95th=[ 1045], 00:35:24.042 | 99.99th=[ 1045] 00:35:24.042 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:24.042 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:24.042 lat (usec) : 250=0.19%, 500=20.45%, 750=61.55%, 1000=14.58% 00:35:24.042 lat (msec) : 2=0.19%, 50=3.03% 00:35:24.042 cpu : usr=1.00%, sys=2.09%, ctx=530, majf=0, minf=1 00:35:24.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.042 job3: (groupid=0, jobs=1): err= 0: pid=1576996: Wed Nov 20 07:35:58 2024 00:35:24.042 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:24.042 slat (nsec): min=27201, max=60501, avg=28599.28, stdev=2972.64 00:35:24.042 clat (usec): min=851, max=1397, avg=1150.25, stdev=95.94 00:35:24.042 lat (usec): min=879, max=1425, avg=1178.85, stdev=95.77 00:35:24.042 clat percentiles (usec): 00:35:24.042 | 1.00th=[ 906], 5.00th=[ 963], 10.00th=[ 1020], 20.00th=[ 1074], 00:35:24.042 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:35:24.042 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1287], 00:35:24.042 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1401], 99.95th=[ 1401], 00:35:24.042 | 99.99th=[ 1401] 00:35:24.042 write: IOPS=603, BW=2414KiB/s (2472kB/s)(2416KiB/1001msec); 0 zone resets 00:35:24.042 slat (nsec): min=9549, max=56691, avg=31481.77, stdev=10964.90 00:35:24.042 clat (usec): min=242, max=1014, avg=610.63, stdev=121.19 00:35:24.042 lat (usec): min=255, max=1050, avg=642.12, stdev=126.32 00:35:24.042 clat percentiles (usec): 00:35:24.042 | 1.00th=[ 355], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 498], 00:35:24.042 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:35:24.042 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:35:24.042 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 1012], 99.95th=[ 1012], 00:35:24.042 | 99.99th=[ 1012] 00:35:24.042 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:35:24.042 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:24.042 lat (usec) : 250=0.09%, 500=10.84%, 750=37.10%, 1000=9.50% 00:35:24.042 lat (msec) : 2=42.47% 00:35:24.042 cpu : usr=1.70%, sys=5.10%, ctx=1118, majf=0, minf=1 00:35:24.042 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.042 issued rwts: total=512,604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:24.042 00:35:24.042 Run status group 0 (all jobs): 00:35:24.042 READ: bw=6351KiB/s (6503kB/s), 63.7KiB/s-2214KiB/s (65.3kB/s-2267kB/s), io=6376KiB (6529kB), run=1001-1004msec 00:35:24.042 WRITE: bw=10.6MiB/s (11.1MB/s), 2040KiB/s-4092KiB/s (2089kB/s-4190kB/s), io=10.7MiB (11.2MB), run=1001-1004msec 00:35:24.042 00:35:24.042 Disk stats (read/write): 00:35:24.042 nvme0n1: ios=562/725, merge=0/0, ticks=528/367, in_queue=895, util=90.38% 00:35:24.042 nvme0n2: ios=448/512, merge=0/0, ticks=676/160, in_queue=836, util=87.54% 00:35:24.042 nvme0n3: ios=34/512, merge=0/0, ticks=1383/247, in_queue=1630, util=96.61% 00:35:24.042 nvme0n4: ios=472/512, merge=0/0, ticks=947/256, in_queue=1203, util=96.78% 00:35:24.042 07:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:24.042 [global] 00:35:24.042 thread=1 00:35:24.042 invalidate=1 00:35:24.042 rw=randwrite 00:35:24.042 time_based=1 00:35:24.042 runtime=1 00:35:24.042 ioengine=libaio 00:35:24.042 direct=1 00:35:24.042 bs=4096 00:35:24.042 iodepth=1 00:35:24.042 norandommap=0 00:35:24.042 numjobs=1 00:35:24.042 00:35:24.042 verify_dump=1 00:35:24.042 verify_backlog=512 00:35:24.042 verify_state_save=0 00:35:24.042 do_verify=1 00:35:24.042 verify=crc32c-intel 00:35:24.042 [job0] 00:35:24.042 filename=/dev/nvme0n1 00:35:24.042 [job1] 00:35:24.042 filename=/dev/nvme0n2 00:35:24.042 [job2] 00:35:24.042 filename=/dev/nvme0n3 00:35:24.042 [job3] 00:35:24.042 filename=/dev/nvme0n4 00:35:24.042 Could not set queue depth (nvme0n1) 00:35:24.042 Could not set queue depth (nvme0n2) 00:35:24.042 Could not set queue depth (nvme0n3) 00:35:24.042 Could not set queue depth (nvme0n4) 00:35:24.303 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.303 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.303 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.303 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.303 fio-3.35 00:35:24.303 Starting 4 threads 00:35:25.690 00:35:25.690 job0: (groupid=0, jobs=1): err= 0: pid=1577404: Wed Nov 20 07:36:00 2024 00:35:25.690 read: IOPS=32, BW=131KiB/s (134kB/s)(132KiB/1006msec) 00:35:25.690 slat (nsec): min=10377, max=29271, avg=25410.94, stdev=3572.12 00:35:25.690 clat (usec): min=510, max=41836, avg=25250.98, stdev=20008.27 00:35:25.690 lat (usec): min=537, max=41863, avg=25276.39, stdev=20007.37 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 510], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 857], 00:35:25.690 | 30.00th=[ 930], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:35:25.690 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:25.690 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:25.690 | 99.99th=[41681] 00:35:25.690 write: IOPS=508, BW=2036KiB/s 
(2085kB/s)(2048KiB/1006msec); 0 zone resets 00:35:25.690 slat (usec): min=9, max=12375, avg=44.73, stdev=546.13 00:35:25.690 clat (usec): min=113, max=707, avg=282.40, stdev=155.15 00:35:25.690 lat (usec): min=123, max=12997, avg=327.13, stdev=583.98 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 133], 00:35:25.690 | 30.00th=[ 143], 40.00th=[ 161], 50.00th=[ 258], 60.00th=[ 302], 00:35:25.690 | 70.00th=[ 367], 80.00th=[ 429], 90.00th=[ 510], 95.00th=[ 594], 00:35:25.690 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 709], 99.95th=[ 709], 00:35:25.690 | 99.99th=[ 709] 00:35:25.690 bw ( KiB/s): min= 4096, max= 4096, per=34.69%, avg=4096.00, stdev= 0.00, samples=1 00:35:25.690 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:25.690 lat (usec) : 250=44.40%, 500=39.27%, 750=11.01%, 1000=1.47% 00:35:25.690 lat (msec) : 2=0.18%, 50=3.67% 00:35:25.690 cpu : usr=0.50%, sys=1.19%, ctx=550, majf=0, minf=1 00:35:25.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:25.690 job1: (groupid=0, jobs=1): err= 0: pid=1577405: Wed Nov 20 07:36:00 2024 00:35:25.690 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:25.690 slat (nsec): min=7590, max=46514, avg=25336.50, stdev=2744.86 00:35:25.690 clat (usec): min=499, max=1309, avg=942.11, stdev=135.25 00:35:25.690 lat (usec): min=525, max=1335, avg=967.45, stdev=135.35 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 594], 5.00th=[ 701], 10.00th=[ 775], 20.00th=[ 840], 00:35:25.690 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 971], 00:35:25.690 | 70.00th=[ 1004], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1172], 00:35:25.690 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:35:25.690 | 99.99th=[ 1303] 00:35:25.690 write: IOPS=786, BW=3145KiB/s (3220kB/s)(3148KiB/1001msec); 0 zone resets 00:35:25.690 slat (nsec): min=9372, max=59104, avg=29591.49, stdev=8327.58 00:35:25.690 clat (usec): min=183, max=1923, avg=598.49, stdev=160.30 00:35:25.690 lat (usec): min=211, max=1956, avg=628.08, stdev=162.15 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 235], 5.00th=[ 367], 10.00th=[ 404], 20.00th=[ 465], 00:35:25.690 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:35:25.690 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 840], 00:35:25.690 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1926], 99.95th=[ 1926], 00:35:25.690 | 99.99th=[ 1926] 00:35:25.690 bw ( KiB/s): min= 4096, max= 4096, per=34.69%, avg=4096.00, stdev= 0.00, samples=1 00:35:25.690 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:25.690 lat (usec) : 250=0.85%, 500=16.40%, 750=37.03%, 1000=32.56% 00:35:25.690 lat (msec) : 2=13.16% 00:35:25.690 cpu : usr=2.70%, sys=3.10%, ctx=1299, majf=0, minf=1 00:35:25.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 issued rwts: total=512,787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.690 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:35:25.690 job2: (groupid=0, jobs=1): err= 0: pid=1577410: Wed Nov 20 07:36:00 2024 00:35:25.690 read: IOPS=498, BW=1994KiB/s (2042kB/s)(2060KiB/1033msec) 00:35:25.690 slat (nsec): min=6651, max=48939, avg=25510.38, stdev=7076.33 00:35:25.690 clat (usec): min=386, max=41990, avg=882.47, stdev=2570.21 00:35:25.690 lat (usec): min=414, max=42017, avg=907.98, stdev=2570.48 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 445], 5.00th=[ 502], 10.00th=[ 562], 20.00th=[ 619], 00:35:25.690 | 30.00th=[ 660], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 766], 00:35:25.690 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 889], 00:35:25.690 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[42206], 99.95th=[42206], 00:35:25.690 | 99.99th=[42206] 00:35:25.690 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:35:25.690 slat (nsec): min=8976, max=82027, avg=30282.62, stdev=9929.13 00:35:25.690 clat (usec): min=126, max=3105, avg=509.54, stdev=149.70 00:35:25.690 lat (usec): min=137, max=3117, avg=539.82, stdev=152.45 00:35:25.690 clat percentiles (usec): 00:35:25.690 | 1.00th=[ 219], 5.00th=[ 289], 10.00th=[ 326], 20.00th=[ 400], 00:35:25.690 | 30.00th=[ 445], 40.00th=[ 486], 50.00th=[ 519], 60.00th=[ 545], 00:35:25.690 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 701], 00:35:25.690 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 1254], 99.95th=[ 3097], 00:35:25.690 | 99.99th=[ 3097] 00:35:25.690 bw ( KiB/s): min= 4096, max= 4096, per=34.69%, avg=4096.00, stdev= 0.00, samples=2 00:35:25.690 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:35:25.690 lat (usec) : 250=1.30%, 500=29.24%, 750=53.67%, 1000=15.53% 00:35:25.690 lat (msec) : 2=0.06%, 4=0.06%, 50=0.13% 00:35:25.690 cpu : usr=3.29%, sys=5.33%, ctx=1540, majf=0, minf=1 00:35:25.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.690 issued rwts: total=515,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:25.690 job3: (groupid=0, jobs=1): err= 0: pid=1577413: Wed Nov 20 07:36:00 2024 00:35:25.690 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:25.690 slat (nsec): min=7591, max=59416, avg=25718.19, stdev=2703.07 00:35:25.690 clat (usec): min=442, max=1329, avg=985.82, stdev=124.46 00:35:25.691 lat (usec): min=468, max=1355, avg=1011.54, stdev=124.38 00:35:25.691 clat percentiles (usec): 00:35:25.691 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 824], 20.00th=[ 889], 00:35:25.691 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1020], 00:35:25.691 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:35:25.691 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1336], 99.95th=[ 1336], 00:35:25.691 | 99.99th=[ 1336] 00:35:25.691 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:35:25.691 slat (nsec): min=9511, max=60687, avg=29253.60, stdev=8504.36 00:35:25.691 clat (usec): min=192, max=1186, avg=621.69, stdev=132.01 00:35:25.691 lat (usec): min=203, max=1218, avg=650.95, stdev=134.39 00:35:25.691 clat percentiles (usec): 00:35:25.691 | 1.00th=[ 273], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 506], 00:35:25.691 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:35:25.691 | 70.00th=[ 693], 
80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 824], 00:35:25.691 | 99.00th=[ 914], 99.50th=[ 971], 99.90th=[ 1188], 99.95th=[ 1188], 00:35:25.691 | 99.99th=[ 1188] 00:35:25.691 bw ( KiB/s): min= 4096, max= 4096, per=34.69%, avg=4096.00, stdev= 0.00, samples=1 00:35:25.691 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:25.691 lat (usec) : 250=0.24%, 500=10.50%, 750=39.66%, 1000=30.13% 00:35:25.691 lat (msec) : 2=19.47% 00:35:25.691 cpu : usr=2.50%, sys=2.90%, ctx=1238, majf=0, minf=1 00:35:25.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.691 issued rwts: total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:25.691 00:35:25.691 Run status group 0 (all jobs): 00:35:25.691 READ: bw=6087KiB/s (6233kB/s), 131KiB/s-2046KiB/s (134kB/s-2095kB/s), io=6288KiB (6439kB), run=1001-1033msec 00:35:25.691 WRITE: bw=11.5MiB/s (12.1MB/s), 2036KiB/s-3965KiB/s (2085kB/s-4060kB/s), io=11.9MiB (12.5MB), run=1001-1033msec 00:35:25.691 00:35:25.691 Disk stats (read/write): 00:35:25.691 nvme0n1: ios=73/512, merge=0/0, ticks=1065/131, in_queue=1196, util=96.39% 00:35:25.691 nvme0n2: ios=551/546, merge=0/0, ticks=560/306, in_queue=866, util=92.35% 00:35:25.691 nvme0n3: ios=512/826, merge=0/0, ticks=304/330, in_queue=634, util=88.38% 00:35:25.691 nvme0n4: ios=515/512, merge=0/0, ticks=779/309, in_queue=1088, util=91.66% 00:35:25.691 07:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:25.691 [global] 00:35:25.691 thread=1 00:35:25.691 invalidate=1 00:35:25.691 rw=write 00:35:25.691 time_based=1 00:35:25.691 runtime=1 00:35:25.691 ioengine=libaio 00:35:25.691 direct=1 00:35:25.691 bs=4096 00:35:25.691 iodepth=128 00:35:25.691 norandommap=0 00:35:25.691 numjobs=1 00:35:25.691 00:35:25.691 verify_dump=1 00:35:25.691 verify_backlog=512 00:35:25.691 verify_state_save=0 00:35:25.691 do_verify=1 00:35:25.691 verify=crc32c-intel 00:35:25.691 [job0] 00:35:25.691 filename=/dev/nvme0n1 00:35:25.691 [job1] 00:35:25.691 filename=/dev/nvme0n2 00:35:25.691 [job2] 00:35:25.691 filename=/dev/nvme0n3 00:35:25.691 [job3] 00:35:25.691 filename=/dev/nvme0n4 00:35:25.691 Could not set queue depth (nvme0n1) 00:35:25.691 Could not set queue depth (nvme0n2) 00:35:25.691 Could not set queue depth (nvme0n3) 00:35:25.691 Could not set queue depth (nvme0n4) 00:35:25.951 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.951 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.951 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.951 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.951 fio-3.35 00:35:25.951 Starting 4 threads 00:35:27.337 00:35:27.337 job0: (groupid=0, jobs=1): err= 0: pid=1577943: Wed Nov 20 07:36:01 2024 00:35:27.337 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:35:27.337 slat (nsec): min=959, max=17222k, avg=176703.84, stdev=1193452.90 00:35:27.337 clat (usec): min=4983, max=99170, avg=19973.99, stdev=12567.31 
00:35:27.337 lat (usec): min=4992, max=99177, avg=20150.69, stdev=12672.13 00:35:27.337 clat percentiles (usec): 00:35:27.337 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[10945], 00:35:27.337 | 30.00th=[12780], 40.00th=[15139], 50.00th=[16909], 60.00th=[19530], 00:35:27.337 | 70.00th=[21890], 80.00th=[25560], 90.00th=[32900], 95.00th=[38011], 00:35:27.337 | 99.00th=[81265], 99.50th=[91751], 99.90th=[99091], 99.95th=[99091], 00:35:27.337 | 99.99th=[99091] 00:35:27.337 write: IOPS=3213, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1015msec); 0 zone resets 00:35:27.337 slat (nsec): min=1648, max=18054k, avg=136334.00, stdev=774483.10 00:35:27.337 clat (usec): min=1145, max=99145, avg=20676.34, stdev=14436.73 00:35:27.337 lat (usec): min=1156, max=99150, avg=20812.67, stdev=14495.13 00:35:27.337 clat percentiles (usec): 00:35:27.337 | 1.00th=[ 4883], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[10945], 00:35:27.337 | 30.00th=[13960], 40.00th=[17433], 50.00th=[18482], 60.00th=[18744], 00:35:27.337 | 70.00th=[18744], 80.00th=[22414], 90.00th=[39584], 95.00th=[53216], 00:35:27.337 | 99.00th=[84411], 99.50th=[91751], 99.90th=[92799], 99.95th=[99091], 00:35:27.337 | 99.99th=[99091] 00:35:27.337 bw ( KiB/s): min= 8688, max=16384, per=13.38%, avg=12536.00, stdev=5441.89, samples=2 00:35:27.337 iops : min= 2172, max= 4096, avg=3134.00, stdev=1360.47, samples=2 00:35:27.337 lat (msec) : 2=0.03%, 4=0.21%, 10=13.56%, 20=56.85%, 50=24.99% 00:35:27.337 lat (msec) : 100=4.36% 00:35:27.337 cpu : usr=2.66%, sys=3.16%, ctx=338, majf=0, minf=1 00:35:27.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:35:27.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:27.337 issued rwts: total=3072,3262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:27.337 job1: (groupid=0, jobs=1): err= 0: pid=1577944: Wed Nov 20 07:36:01 2024 00:35:27.337 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:35:27.337 slat (nsec): min=954, max=16545k, avg=122949.21, stdev=886379.67 00:35:27.337 clat (usec): min=6210, max=57329, avg=15327.44, stdev=7278.87 00:35:27.337 lat (usec): min=6219, max=57340, avg=15450.39, stdev=7346.28 00:35:27.337 clat percentiles (usec): 00:35:27.337 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 9896], 20.00th=[10421], 00:35:27.337 | 30.00th=[10683], 40.00th=[11469], 50.00th=[14222], 60.00th=[15664], 00:35:27.337 | 70.00th=[17433], 80.00th=[18744], 90.00th=[22414], 95.00th=[24511], 00:35:27.337 | 99.00th=[53216], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:35:27.337 | 99.99th=[57410] 00:35:27.337 write: IOPS=3875, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:35:27.337 slat (nsec): min=1668, max=15678k, avg=139012.68, stdev=835552.62 00:35:27.337 clat (msec): min=2, max=103, avg=18.68, stdev=14.31 00:35:27.338 lat (msec): min=2, max=103, avg=18.82, stdev=14.39 00:35:27.338 clat percentiles (msec): 00:35:27.338 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:35:27.338 | 30.00th=[ 12], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 19], 00:35:27.338 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 25], 95.00th=[ 43], 00:35:27.338 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 104], 99.95th=[ 104], 00:35:27.338 | 99.99th=[ 104] 00:35:27.338 bw ( KiB/s): min=13032, max=17224, per=16.14%, avg=15128.00, stdev=2964.19, samples=2 00:35:27.338 iops : min= 3258, max= 4306, avg=3782.00, 
stdev=741.05, samples=2 00:35:27.338 lat (msec) : 4=0.19%, 10=15.71%, 20=67.75%, 50=13.40%, 100=2.74% 00:35:27.338 lat (msec) : 250=0.23% 00:35:27.338 cpu : usr=2.58%, sys=4.86%, ctx=358, majf=0, minf=1 00:35:27.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:27.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:27.338 issued rwts: total=3584,3910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:27.338 job2: (groupid=0, jobs=1): err= 0: pid=1577949: Wed Nov 20 07:36:01 2024 00:35:27.338 read: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec) 00:35:27.338 slat (nsec): min=998, max=6948.2k, avg=59905.09, stdev=481125.99 00:35:27.338 clat (usec): min=4278, max=14910, avg=8151.44, stdev=1961.66 00:35:27.338 lat (usec): min=4287, max=14939, avg=8211.35, stdev=1994.87 00:35:27.338 clat percentiles (usec): 00:35:27.338 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6718], 00:35:27.338 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:35:27.338 | 70.00th=[ 8225], 80.00th=[ 9896], 90.00th=[11469], 95.00th=[12256], 00:35:27.338 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14484], 99.95th=[14484], 00:35:27.338 | 99.99th=[14877] 00:35:27.338 write: IOPS=8355, BW=32.6MiB/s (34.2MB/s)(32.9MiB/1007msec); 0 zone resets 00:35:27.338 slat (nsec): min=1675, max=8165.8k, avg=54461.99, stdev=385536.45 00:35:27.338 clat (usec): min=463, max=15782, avg=7254.07, stdev=2015.59 00:35:27.338 lat (usec): min=471, max=15795, avg=7308.53, stdev=2021.97 00:35:27.338 clat percentiles (usec): 00:35:27.338 | 1.00th=[ 3097], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 5080], 00:35:27.338 | 30.00th=[ 6325], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7701], 00:35:27.338 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[10028], 95.00th=[10552], 00:35:27.338 | 99.00th=[12911], 99.50th=[14222], 99.90th=[15270], 99.95th=[15270], 00:35:27.338 | 99.99th=[15795] 00:35:27.338 bw ( KiB/s): min=33008, max=33288, per=35.37%, avg=33148.00, stdev=197.99, samples=2 00:35:27.338 iops : min= 8252, max= 8322, avg=8287.00, stdev=49.50, samples=2 00:35:27.338 lat (usec) : 500=0.02%, 1000=0.05% 00:35:27.338 lat (msec) : 2=0.21%, 4=0.95%, 10=84.17%, 20=14.60% 00:35:27.338 cpu : usr=5.37%, sys=8.15%, ctx=591, majf=0, minf=1 00:35:27.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:27.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:27.338 issued rwts: total=8192,8414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:27.338 job3: (groupid=0, jobs=1): err= 0: pid=1577950: Wed Nov 20 07:36:01 2024 00:35:27.338 read: IOPS=7891, BW=30.8MiB/s (32.3MB/s)(31.0MiB/1005msec) 00:35:27.338 slat (nsec): min=976, max=7618.3k, avg=63546.37, stdev=528933.82 00:35:27.338 clat (usec): min=2370, max=18713, avg=8536.40, stdev=2102.67 00:35:27.338 lat (usec): min=2477, max=19795, avg=8599.95, stdev=2140.46 00:35:27.338 clat percentiles (usec): 00:35:27.338 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7111], 00:35:27.338 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8291], 00:35:27.338 | 70.00th=[ 8586], 80.00th=[ 9896], 90.00th=[11863], 95.00th=[13042], 00:35:27.338 | 
99.00th=[14615], 99.50th=[15139], 99.90th=[18744], 99.95th=[18744], 00:35:27.338 | 99.99th=[18744] 00:35:27.338 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:35:27.338 slat (nsec): min=1658, max=6632.3k, avg=56158.21, stdev=402560.00 00:35:27.338 clat (usec): min=1231, max=14886, avg=7301.41, stdev=1848.23 00:35:27.338 lat (usec): min=1255, max=14888, avg=7357.57, stdev=1862.74 00:35:27.338 clat percentiles (usec): 00:35:27.338 | 1.00th=[ 3228], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5276], 00:35:27.338 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 7832], 00:35:27.338 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[10290], 95.00th=[10552], 00:35:27.338 | 99.00th=[11338], 99.50th=[11863], 99.90th=[14484], 99.95th=[14615], 00:35:27.338 | 99.99th=[14877] 00:35:27.338 bw ( KiB/s): min=32768, max=32768, per=34.97%, avg=32768.00, stdev= 0.00, samples=2 00:35:27.338 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:35:27.338 lat (msec) : 2=0.11%, 4=1.18%, 10=83.06%, 20=15.65% 00:35:27.338 cpu : usr=5.08%, sys=7.67%, ctx=588, majf=0, minf=1 00:35:27.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:27.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:27.338 issued rwts: total=7931,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:27.338 00:35:27.338 Run status group 0 (all jobs): 00:35:27.338 READ: bw=87.7MiB/s (91.9MB/s), 11.8MiB/s-31.8MiB/s (12.4MB/s-33.3MB/s), io=89.0MiB (93.3MB), run=1005-1015msec 00:35:27.338 WRITE: bw=91.5MiB/s (96.0MB/s), 12.6MiB/s-32.6MiB/s (13.2MB/s-34.2MB/s), io=92.9MiB (97.4MB), run=1005-1015msec 00:35:27.338 00:35:27.338 Disk stats (read/write): 00:35:27.338 nvme0n1: ios=2610/2631, merge=0/0, ticks=52781/47210, in_queue=99991, util=86.97% 00:35:27.338 nvme0n2: ios=3110/3079, merge=0/0, ticks=43952/59135, in_queue=103087, util=96.73% 00:35:27.338 nvme0n3: ios=6699/7042, merge=0/0, ticks=52089/48896, in_queue=100985, util=96.41% 00:35:27.338 nvme0n4: ios=6684/6754, merge=0/0, ticks=53601/47124, in_queue=100725, util=91.98% 00:35:27.338 07:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:27.338 [global] 00:35:27.338 thread=1 00:35:27.338 invalidate=1 00:35:27.338 rw=randwrite 00:35:27.338 time_based=1 00:35:27.338 runtime=1 00:35:27.338 ioengine=libaio 00:35:27.338 direct=1 00:35:27.338 bs=4096 00:35:27.338 iodepth=128 00:35:27.338 norandommap=0 00:35:27.338 numjobs=1 00:35:27.338 00:35:27.338 verify_dump=1 00:35:27.338 verify_backlog=512 00:35:27.338 verify_state_save=0 00:35:27.338 do_verify=1 00:35:27.338 verify=crc32c-intel 00:35:27.338 [job0] 00:35:27.338 filename=/dev/nvme0n1 00:35:27.338 [job1] 00:35:27.338 filename=/dev/nvme0n2 00:35:27.338 [job2] 00:35:27.338 filename=/dev/nvme0n3 00:35:27.338 [job3] 00:35:27.338 filename=/dev/nvme0n4 00:35:27.338 Could not set queue depth (nvme0n1) 00:35:27.338 Could not set queue depth (nvme0n2) 00:35:27.338 Could not set queue depth (nvme0n3) 00:35:27.338 Could not set queue depth (nvme0n4) 00:35:27.598 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:27.598 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:27.598 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:27.598 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:27.598 fio-3.35 00:35:27.598 Starting 4 threads 00:35:28.983 00:35:28.983 job0: (groupid=0, jobs=1): err= 0: pid=1578545: Wed Nov 20 07:36:03 2024 00:35:28.983 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:35:28.983 slat (nsec): min=1041, max=12375k, avg=75817.94, stdev=567038.62 00:35:28.983 clat (usec): min=3902, max=24378, avg=9744.50, stdev=3478.45 00:35:28.983 lat (usec): min=3908, max=25796, avg=9820.32, stdev=3519.77 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 5866], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7504], 00:35:28.983 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8455], 00:35:28.983 | 70.00th=[10028], 80.00th=[12518], 90.00th=[14877], 95.00th=[17957], 00:35:28.983 | 99.00th=[20579], 99.50th=[20579], 99.90th=[23987], 99.95th=[23987], 00:35:28.983 | 99.99th=[24249] 00:35:28.983 write: IOPS=4780, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1012msec); 0 zone resets 00:35:28.983 slat (nsec): min=1686, max=19919k, avg=128261.44, stdev=777839.97 00:35:28.983 clat (usec): min=1643, max=78416, avg=17212.23, stdev=17814.31 00:35:28.983 lat (usec): min=1651, max=78425, avg=17340.50, stdev=17935.07 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 3326], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 6980], 00:35:28.983 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 8979], 60.00th=[12387], 00:35:28.983 | 70.00th=[14615], 80.00th=[21890], 90.00th=[42206], 95.00th=[67634], 00:35:28.983 | 99.00th=[73925], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:35:28.983 | 99.99th=[78119] 00:35:28.983 bw ( KiB/s): min=15152, max=22536, per=19.21%, avg=18844.00, stdev=5221.28, samples=2 00:35:28.983 iops : min= 3788, max= 5634, avg=4711.00, stdev=1305.32, samples=2 00:35:28.983 lat (msec) : 2=0.04%, 4=0.98%, 10=60.10%, 20=25.55%, 50=8.71% 00:35:28.983 lat (msec) : 100=4.62% 00:35:28.983 cpu : usr=2.97%, sys=6.43%, ctx=337, majf=0, minf=1 00:35:28.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:28.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:28.983 issued rwts: total=4608,4838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:28.983 job1: (groupid=0, jobs=1): err= 0: pid=1578546: Wed Nov 20 07:36:03 2024 00:35:28.983 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:35:28.983 slat (nsec): min=947, max=8718.3k, avg=77510.41, stdev=629963.86 00:35:28.983 clat (usec): min=3061, max=18091, avg=9924.75, stdev=2332.49 00:35:28.983 lat (usec): min=3071, max=21872, avg=10002.26, stdev=2391.19 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8586], 00:35:28.983 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:35:28.983 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[14091], 95.00th=[15139], 00:35:28.983 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:35:28.983 | 99.99th=[18220] 00:35:28.983 write: IOPS=6982, BW=27.3MiB/s (28.6MB/s)(27.5MiB/1009msec); 0 zone resets 00:35:28.983 slat (nsec): min=1536, max=7999.7k, avg=63736.57, 
stdev=444953.43 00:35:28.983 clat (usec): min=1129, max=18345, avg=8800.80, stdev=2244.33 00:35:28.983 lat (usec): min=1139, max=18348, avg=8864.53, stdev=2261.65 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 3392], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6980], 00:35:28.983 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9372], 00:35:28.983 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[12125], 95.00th=[12911], 00:35:28.983 | 99.00th=[15139], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:35:28.983 | 99.99th=[18220] 00:35:28.983 bw ( KiB/s): min=26672, max=28672, per=28.21%, avg=27672.00, stdev=1414.21, samples=2 00:35:28.983 iops : min= 6668, max= 7168, avg=6918.00, stdev=353.55, samples=2 00:35:28.983 lat (msec) : 2=0.13%, 4=0.71%, 10=79.25%, 20=19.91% 00:35:28.983 cpu : usr=4.76%, sys=6.35%, ctx=537, majf=0, minf=2 00:35:28.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:28.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:28.983 issued rwts: total=6656,7045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:28.983 job2: (groupid=0, jobs=1): err= 0: pid=1578547: Wed Nov 20 07:36:03 2024 00:35:28.983 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:35:28.983 slat (nsec): min=1068, max=16402k, avg=109872.22, stdev=788491.55 00:35:28.983 clat (usec): min=3447, max=70872, avg=12235.60, stdev=8045.01 00:35:28.983 lat (usec): min=3454, max=70880, avg=12345.47, stdev=8144.08 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 8586], 00:35:28.983 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[10945], 00:35:28.983 | 70.00th=[12649], 80.00th=[14091], 90.00th=[18220], 95.00th=[20841], 00:35:28.983 | 99.00th=[56886], 99.50th=[63701], 99.90th=[70779], 99.95th=[70779], 00:35:28.983 | 99.99th=[70779] 00:35:28.983 write: IOPS=4424, BW=17.3MiB/s (18.1MB/s)(17.5MiB/1012msec); 0 zone resets 00:35:28.983 slat (nsec): min=1627, max=12030k, avg=117064.48, stdev=667177.78 00:35:28.983 clat (usec): min=1179, max=70860, avg=17479.12, stdev=15970.84 00:35:28.983 lat (usec): min=1189, max=70868, avg=17596.18, stdev=16071.47 00:35:28.983 clat percentiles (usec): 00:35:28.983 | 1.00th=[ 3654], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 7504], 00:35:28.983 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[11076], 60.00th=[13435], 00:35:28.983 | 70.00th=[14353], 80.00th=[25560], 90.00th=[47973], 95.00th=[56886], 00:35:28.983 | 99.00th=[62129], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:35:28.983 | 99.99th=[70779] 00:35:28.983 bw ( KiB/s): min=13384, max=21424, per=17.74%, avg=17404.00, stdev=5685.14, samples=2 00:35:28.983 iops : min= 3346, max= 5356, avg=4351.00, stdev=1421.28, samples=2 00:35:28.983 lat (msec) : 2=0.10%, 4=0.92%, 10=50.13%, 20=34.20%, 50=9.00% 00:35:28.983 lat (msec) : 100=5.64% 00:35:28.983 cpu : usr=3.76%, sys=4.65%, ctx=334, majf=0, minf=2 00:35:28.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:28.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:28.984 issued rwts: total=4096,4478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:28.984 
job3: (groupid=0, jobs=1): err= 0: pid=1578548: Wed Nov 20 07:36:03 2024 00:35:28.984 read: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec) 00:35:28.984 slat (nsec): min=998, max=6814.0k, avg=61795.37, stdev=482300.15 00:35:28.984 clat (usec): min=2284, max=15678, avg=8307.12, stdev=2000.75 00:35:28.984 lat (usec): min=2594, max=18718, avg=8368.92, stdev=2031.45 00:35:28.984 clat percentiles (usec): 00:35:28.984 | 1.00th=[ 3982], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6783], 00:35:28.984 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8225], 00:35:28.984 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[12387], 00:35:28.984 | 99.00th=[13566], 99.50th=[14091], 99.90th=[15008], 99.95th=[15139], 00:35:28.984 | 99.99th=[15664] 00:35:28.984 write: IOPS=8397, BW=32.8MiB/s (34.4MB/s)(33.0MiB/1007msec); 0 zone resets 00:35:28.984 slat (nsec): min=1601, max=6607.5k, avg=53707.37, stdev=400691.19 00:35:28.984 clat (usec): min=1137, max=14976, avg=7067.75, stdev=1734.53 00:35:28.984 lat (usec): min=1148, max=14994, avg=7121.46, stdev=1747.94 00:35:28.984 clat percentiles (usec): 00:35:28.984 | 1.00th=[ 3195], 5.00th=[ 4555], 10.00th=[ 4817], 20.00th=[ 5604], 00:35:28.984 | 30.00th=[ 6194], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7504], 00:35:28.984 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 9765], 95.00th=[10552], 00:35:28.984 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13566], 99.95th=[13960], 00:35:28.984 | 99.99th=[15008] 00:35:28.984 bw ( KiB/s): min=32768, max=33856, per=33.96%, avg=33312.00, stdev=769.33, samples=2 00:35:28.984 iops : min= 8192, max= 8464, avg=8328.00, stdev=192.33, samples=2 00:35:28.984 lat (msec) : 2=0.09%, 4=1.56%, 10=84.94%, 20=13.41% 00:35:28.984 cpu : usr=6.66%, sys=7.06%, ctx=466, majf=0, minf=2 00:35:28.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:28.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:28.984 issued rwts: total=8192,8456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:28.984 00:35:28.984 Run status group 0 (all jobs): 00:35:28.984 READ: bw=90.9MiB/s (95.3MB/s), 15.8MiB/s-31.8MiB/s (16.6MB/s-33.3MB/s), io=92.0MiB (96.5MB), run=1007-1012msec 00:35:28.984 WRITE: bw=95.8MiB/s (100MB/s), 17.3MiB/s-32.8MiB/s (18.1MB/s-34.4MB/s), io=96.9MiB (102MB), run=1007-1012msec 00:35:28.984 00:35:28.984 Disk stats (read/write): 00:35:28.984 nvme0n1: ios=3488/3584, merge=0/0, ticks=32194/69922, in_queue=102116, util=98.40% 00:35:28.984 nvme0n2: ios=5671/5743, merge=0/0, ticks=53066/48025, in_queue=101091, util=88.18% 00:35:28.984 nvme0n3: ios=3584/4063, merge=0/0, ticks=36542/62216, in_queue=98758, util=88.38% 00:35:28.984 nvme0n4: ios=6656/7115, merge=0/0, ticks=52914/47871, in_queue=100785, util=89.52% 00:35:28.984 07:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:28.984 07:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1578876 00:35:28.984 07:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:28.984 07:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:28.984 [global] 00:35:28.984 thread=1 00:35:28.984 invalidate=1 
00:35:28.984 rw=read 00:35:28.984 time_based=1 00:35:28.984 runtime=10 00:35:28.984 ioengine=libaio 00:35:28.984 direct=1 00:35:28.984 bs=4096 00:35:28.984 iodepth=1 00:35:28.984 norandommap=1 00:35:28.984 numjobs=1 00:35:28.984 00:35:28.984 [job0] 00:35:28.984 filename=/dev/nvme0n1 00:35:28.984 [job1] 00:35:28.984 filename=/dev/nvme0n2 00:35:28.984 [job2] 00:35:28.984 filename=/dev/nvme0n3 00:35:28.984 [job3] 00:35:28.984 filename=/dev/nvme0n4 00:35:28.984 Could not set queue depth (nvme0n1) 00:35:28.984 Could not set queue depth (nvme0n2) 00:35:28.984 Could not set queue depth (nvme0n3) 00:35:28.984 Could not set queue depth (nvme0n4) 00:35:29.245 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:29.245 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:29.245 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:29.245 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:29.245 fio-3.35 00:35:29.245 Starting 4 threads 00:35:31.794 07:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:32.056 07:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:32.056 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9437184, buflen=4096 00:35:32.056 fio: pid=1579069, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:32.317 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:35:32.317 fio: pid=1579068, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:32.317 07:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:32.317 07:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:32.577 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16003072, buflen=4096 00:35:32.577 fio: pid=1579066, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:32.577 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:32.577 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:32.577 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2031616, buflen=4096 00:35:32.577 fio: pid=1579067, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:32.577 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:32.578 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:32.578 00:35:32.578 job0: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1579066: Wed Nov 20 07:36:07 2024 00:35:32.578 read: IOPS=1313, BW=5253KiB/s (5379kB/s)(15.3MiB/2975msec) 00:35:32.578 slat (usec): min=6, max=35769, avg=45.00, stdev=713.42 00:35:32.578 clat (usec): min=182, max=2490, avg=704.91, stdev=162.53 00:35:32.578 lat (usec): min=208, max=36660, avg=749.92, stdev=735.79 00:35:32.578 clat percentiles (usec): 00:35:32.578 | 1.00th=[ 262], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 586], 00:35:32.578 | 30.00th=[ 660], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 783], 00:35:32.578 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:35:32.578 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 1012], 00:35:32.578 | 99.99th=[ 2507] 00:35:32.578 bw ( KiB/s): min= 5120, max= 6536, per=64.58%, avg=5534.40, stdev=585.08, samples=5 00:35:32.578 iops : min= 1280, max= 1634, avg=1383.60, stdev=146.27, samples=5 00:35:32.578 lat (usec) : 250=0.84%, 500=14.23%, 750=36.23%, 1000=48.59% 00:35:32.578 lat (msec) : 2=0.05%, 4=0.03% 00:35:32.578 cpu : usr=2.02%, sys=4.84%, ctx=3914, majf=0, minf=1 00:35:32.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 issued rwts: total=3908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.578 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1579067: Wed Nov 20 07:36:07 2024 00:35:32.578 read: IOPS=157, BW=627KiB/s (643kB/s)(1984KiB/3162msec) 00:35:32.578 slat (usec): min=6, max=19478, avg=122.92, stdev=1253.69 00:35:32.578 clat (usec): min=542, max=43047, avg=6199.85, stdev=13692.96 00:35:32.578 lat (usec): min=569, max=55113, avg=6322.96, stdev=13794.21 00:35:32.578 clat percentiles (usec): 00:35:32.578 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 906], 00:35:32.578 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 971], 00:35:32.578 | 70.00th=[ 996], 80.00th=[ 1037], 90.00th=[41157], 95.00th=[42206], 00:35:32.578 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:32.578 | 99.99th=[43254] 00:35:32.578 bw ( KiB/s): min= 96, max= 2614, per=6.01%, avg=515.67, stdev=1027.97, samples=6 00:35:32.578 iops : min= 24, max= 653, avg=128.83, stdev=256.79, samples=6 00:35:32.578 lat (usec) : 750=3.62%, 1000=67.20% 00:35:32.578 lat (msec) : 2=16.10%, 50=12.88% 00:35:32.578 cpu : usr=0.44%, sys=0.47%, ctx=500, majf=0, minf=2 00:35:32.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 issued rwts: total=497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.578 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1579068: Wed Nov 20 07:36:07 2024 00:35:32.578 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(268KiB/2795msec) 00:35:32.578 slat (usec): min=25, max=15576, avg=254.91, stdev=1885.74 00:35:32.578 clat (usec): min=855, max=42166, avg=41058.03, stdev=5004.02 00:35:32.578 lat (usec): min=892, max=56937, avg=41316.35, 
stdev=5364.52 00:35:32.578 clat percentiles (usec): 00:35:32.578 | 1.00th=[ 857], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:32.578 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:32.578 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:32.578 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:32.578 | 99.99th=[42206] 00:35:32.578 bw ( KiB/s): min= 96, max= 104, per=1.13%, avg=97.60, stdev= 3.58, samples=5 00:35:32.578 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:35:32.578 lat (usec) : 1000=1.47% 00:35:32.578 lat (msec) : 50=97.06% 00:35:32.578 cpu : usr=0.00%, sys=0.11%, ctx=69, majf=0, minf=2 00:35:32.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.578 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1579069: Wed Nov 20 07:36:07 2024 00:35:32.578 read: IOPS=874, BW=3495KiB/s (3579kB/s)(9216KiB/2637msec) 00:35:32.578 slat (nsec): min=6612, max=64322, avg=24153.67, stdev=6243.60 00:35:32.578 clat (usec): min=191, max=42091, avg=1104.83, stdev=3610.88 00:35:32.578 lat (usec): min=198, max=42118, avg=1128.98, stdev=3611.25 00:35:32.578 clat percentiles (usec): 00:35:32.578 | 1.00th=[ 465], 5.00th=[ 586], 10.00th=[ 635], 20.00th=[ 693], 00:35:32.578 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 832], 00:35:32.578 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 947], 00:35:32.578 | 99.00th=[ 1029], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:32.578 | 99.99th=[42206] 00:35:32.578 bw ( KiB/s): min= 96, max= 4976, per=42.96%, avg=3681.60, stdev=2095.62, samples=5 00:35:32.578 iops : min= 24, max= 1244, avg=920.40, stdev=523.91, samples=5 00:35:32.578 lat (usec) : 250=0.04%, 500=1.87%, 750=31.89%, 1000=64.21% 00:35:32.578 lat (msec) : 2=1.17%, 50=0.78% 00:35:32.578 cpu : usr=0.95%, sys=2.43%, ctx=2305, majf=0, minf=2 00:35:32.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.578 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.578 00:35:32.578 Run status group 0 (all jobs): 00:35:32.578 READ: bw=8569KiB/s (8775kB/s), 95.9KiB/s-5253KiB/s (98.2kB/s-5379kB/s), io=26.5MiB (27.7MB), run=2637-3162msec 00:35:32.578 00:35:32.578 Disk stats (read/write): 00:35:32.578 nvme0n1: ios=3763/0, merge=0/0, ticks=2255/0, in_queue=2255, util=92.05% 00:35:32.578 nvme0n2: ios=441/0, merge=0/0, ticks=2993/0, in_queue=2993, util=94.21% 00:35:32.578 nvme0n3: ios=63/0, merge=0/0, ticks=2585/0, in_queue=2585, util=95.99% 00:35:32.578 nvme0n4: ios=2303/0, merge=0/0, ticks=2448/0, in_queue=2448, util=96.42% 00:35:32.839 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:32.839 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:33.099 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:33.099 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:33.099 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:33.099 07:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:33.359 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:33.359 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1578876 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:33.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:33.619 nvmf hotplug test: fio failed as expected 00:35:33.619 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 
-- # rm -f ./local-job1-1-verify.state 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.879 rmmod nvme_tcp 00:35:33.879 rmmod nvme_fabrics 00:35:33.879 rmmod nvme_keyring 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1575599 ']' 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1575599 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1575599 ']' 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1575599 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:33.879 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1575599 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1575599' 00:35:34.138 killing process with pid 1575599 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1575599 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1575599 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:34.138 07:36:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:34.138 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.139 07:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:36.680 00:35:36.680 real 0m29.004s 00:35:36.680 user 2m14.958s 00:35:36.680 sys 0m13.249s 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:36.680 ************************************ 00:35:36.680 END TEST nvmf_fio_target 00:35:36.680 ************************************ 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:36.680 ************************************ 00:35:36.680 START TEST nvmf_bdevio 00:35:36.680 ************************************ 00:35:36.680 07:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:36.680 * Looking for test storage... 
00:35:36.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.680 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.681 --rc genhtml_branch_coverage=1 00:35:36.681 --rc genhtml_function_coverage=1 00:35:36.681 --rc genhtml_legend=1 00:35:36.681 --rc geninfo_all_blocks=1 00:35:36.681 --rc geninfo_unexecuted_blocks=1 00:35:36.681 00:35:36.681 ' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.681 --rc genhtml_branch_coverage=1 00:35:36.681 --rc genhtml_function_coverage=1 00:35:36.681 --rc genhtml_legend=1 00:35:36.681 --rc geninfo_all_blocks=1 00:35:36.681 --rc geninfo_unexecuted_blocks=1 00:35:36.681 00:35:36.681 ' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.681 --rc genhtml_branch_coverage=1 00:35:36.681 --rc genhtml_function_coverage=1 00:35:36.681 --rc genhtml_legend=1 00:35:36.681 --rc geninfo_all_blocks=1 00:35:36.681 --rc geninfo_unexecuted_blocks=1 00:35:36.681 00:35:36.681 ' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:36.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.681 --rc genhtml_branch_coverage=1 00:35:36.681 --rc genhtml_function_coverage=1 00:35:36.681 --rc genhtml_legend=1 00:35:36.681 --rc geninfo_all_blocks=1 00:35:36.681 --rc geninfo_unexecuted_blocks=1 00:35:36.681 00:35:36.681 ' 00:35:36.681 07:36:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.681 07:36:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:36.681 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.682 07:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:44.823 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:44.824 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:44.824 07:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:44.824 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:44.824 Found net devices under 0000:31:00.0: cvl_0_0 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:44.824 Found net devices under 0000:31:00.1: cvl_0_1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.824 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:35:45.085 00:35:45.085 --- 10.0.0.2 ping statistics --- 00:35:45.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.085 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:35:45.085 00:35:45.085 --- 10.0.0.1 ping statistics --- 00:35:45.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.085 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:45.085 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.086 07:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1585117 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1585117 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1585117 ']' 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:45.086 07:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.086 [2024-11-20 07:36:19.790718] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.086 [2024-11-20 07:36:19.791877] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:35:45.086 [2024-11-20 07:36:19.791928] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.345 [2024-11-20 07:36:19.903284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.345 [2024-11-20 07:36:19.954082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.345 [2024-11-20 07:36:19.954136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.345 [2024-11-20 07:36:19.954145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.345 [2024-11-20 07:36:19.954152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.345 [2024-11-20 07:36:19.954158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.345 [2024-11-20 07:36:19.956164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:45.345 [2024-11-20 07:36:19.956323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:45.345 [2024-11-20 07:36:19.956480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.345 [2024-11-20 07:36:19.956480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:45.345 [2024-11-20 07:36:20.043986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
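The target above was launched with '-m 0x78 --interrupt-mode', and the four "Reactor started" notices line up with that core mask: 0x78 is binary 1111000, i.e. CPUs 3, 4, 5 and 6, one reactor per selected core. A minimal sketch for decoding such a mask in the shell (the mask value is taken from the nvmf_tgt command line above):

    mask=0x78
    printf 'mask %s -> cores:' "$mask"
    for i in $(seq 0 31); do
      # test bit i of the core mask; print the CPU index if it is set
      (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done; echo
    # prints: mask 0x78 -> cores: 3 4 5 6

The '--interrupt-mode' flag is what produces the "Set SPDK running in interrupt mode" notice earlier and the per-thread "to intr mode" notices that continue below.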
00:35:45.345 [2024-11-20 07:36:20.045141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.345 [2024-11-20 07:36:20.045332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:45.345 [2024-11-20 07:36:20.045881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.345 [2024-11-20 07:36:20.045943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.915 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:45.915 [2024-11-20 07:36:20.653317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.176 Malloc0 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.176 07:36:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.176 [2024-11-20 07:36:20.745631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:46.176 { 00:35:46.176 "params": { 00:35:46.176 "name": "Nvme$subsystem", 00:35:46.176 "trtype": "$TEST_TRANSPORT", 00:35:46.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.176 "adrfam": "ipv4", 00:35:46.176 "trsvcid": "$NVMF_PORT", 00:35:46.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.176 "hdgst": ${hdgst:-false}, 00:35:46.176 "ddgst": ${ddgst:-false} 00:35:46.176 }, 00:35:46.176 "method": "bdev_nvme_attach_controller" 00:35:46.176 } 00:35:46.176 EOF 00:35:46.176 )") 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:46.176 07:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:46.176 "params": { 00:35:46.176 "name": "Nvme1", 00:35:46.176 "trtype": "tcp", 00:35:46.176 "traddr": "10.0.0.2", 00:35:46.176 "adrfam": "ipv4", 00:35:46.176 "trsvcid": "4420", 00:35:46.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.176 "hdgst": false, 00:35:46.176 "ddgst": false 00:35:46.176 }, 00:35:46.176 "method": "bdev_nvme_attach_controller" 00:35:46.176 }' 00:35:46.176 [2024-11-20 07:36:20.803800] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
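
[Editor's annotation: not part of the captured log] The rpc_cmd calls traced above provision the target end-to-end: TCP transport, a RAM-disk bdev, a subsystem, a namespace, and a listener. rpc_cmd is a thin wrapper around SPDK's stock RPC client, so issued by hand the same sequence would look roughly like the sketch below (the scripts/rpc.py invocation is illustrative; the arguments are verbatim from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP, C2H success opt. disabled, 8 KiB IO unit
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects as the initiator using the JSON blob printed above, which maps to a single bdev_nvme_attach_controller call against 10.0.0.2:4420 with header and data digests disabled.
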
00:35:46.176 [2024-11-20 07:36:20.803881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585251 ] 00:35:46.176 [2024-11-20 07:36:20.890437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:46.176 [2024-11-20 07:36:20.934830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.176 [2024-11-20 07:36:20.934977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.176 [2024-11-20 07:36:20.935100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.436 I/O targets: 00:35:46.436 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:46.436 00:35:46.436 00:35:46.436 CUnit - A unit testing framework for C - Version 2.1-3 00:35:46.436 http://cunit.sourceforge.net/ 00:35:46.436 00:35:46.436 00:35:46.436 Suite: bdevio tests on: Nvme1n1 00:35:46.436 Test: blockdev write read block ...passed 00:35:46.696 Test: blockdev write zeroes read block ...passed 00:35:46.696 Test: blockdev write zeroes read no split ...passed 00:35:46.696 Test: blockdev write zeroes read split ...passed 00:35:46.696 Test: blockdev write zeroes read split partial ...passed 00:35:46.696 Test: blockdev reset ...[2024-11-20 07:36:21.232719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:46.696 [2024-11-20 07:36:21.232792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15754b0 (9): Bad file descriptor 00:35:46.696 [2024-11-20 07:36:21.278224] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:46.696 passed 00:35:46.696 Test: blockdev write read 8 blocks ...passed 00:35:46.696 Test: blockdev write read size > 128k ...passed 00:35:46.696 Test: blockdev write read invalid size ...passed 00:35:46.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:46.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:46.696 Test: blockdev write read max offset ...passed 00:35:46.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:46.696 Test: blockdev writev readv 8 blocks ...passed 00:35:46.696 Test: blockdev writev readv 30 x 1block ...passed 00:35:46.696 Test: blockdev writev readv block ...passed 00:35:46.696 Test: blockdev writev readv size > 128k ...passed 00:35:46.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:46.697 Test: blockdev comparev and writev ...[2024-11-20 07:36:21.457696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.457723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.457735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.457744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.458137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.458146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.458156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.458162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.458589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.458598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.458607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.458613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.459030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.459039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:46.697 [2024-11-20 07:36:21.459049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:46.697 [2024-11-20 07:36:21.459055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:46.957 passed 00:35:46.957 Test: blockdev nvme passthru rw ...passed 00:35:46.957 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:36:21.542409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.957 [2024-11-20 07:36:21.542421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:46.957 [2024-11-20 07:36:21.542662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.957 [2024-11-20 07:36:21.542671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:46.957 [2024-11-20 07:36:21.542937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.957 [2024-11-20 07:36:21.542946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:46.957 [2024-11-20 07:36:21.543255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:46.957 [2024-11-20 07:36:21.543264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:46.957 passed 00:35:46.957 Test: blockdev nvme admin passthru ...passed 00:35:46.957 Test: blockdev copy ...passed 00:35:46.957 00:35:46.957 Run Summary: Type Total Ran Passed Failed Inactive 00:35:46.957 suites 1 1 n/a 0 0 00:35:46.957 tests 23 23 23 0 0 00:35:46.957 asserts 152 152 152 0 n/a 00:35:46.957 00:35:46.957 Elapsed time = 0.963 seconds 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.957 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.957 rmmod nvme_tcp 00:35:47.218 rmmod nvme_fabrics 00:35:47.218 rmmod nvme_keyring 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
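
[Editor's annotation: not part of the captured log] All 23 bdevio cases pass (the COMPARE FAILURE / ABORTED - FAILED FUSED notices above are the expected negative-path output of the fused compare-and-write cases), so the EXIT trap now tears the target down. The killprocess logic traced just below reduces to a guarded kill; a condensed sketch, with the guards taken from the trace:

  killprocess() {                            # condensed from the autotest_common.sh trace
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return     # already gone?
    [ "$(uname)" = Linux ] || return 1       # the ps flags below are Linux-specific
    local name; name=$(ps --no-headers -o comm= "$pid")   # -> reactor_3 for this target
    [ "$name" = sudo ] && return 1           # refuse to kill a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"               # SIGTERM, then reap the child
  }
  killprocess 1585117
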
00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1585117 ']' 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1585117 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1585117 ']' 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1585117 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1585117 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1585117' 00:35:47.218 killing process with pid 1585117 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1585117 00:35:47.218 07:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1585117 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.478 07:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.391 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.391 00:35:49.391 real 0m13.149s 00:35:49.391 user 
0m8.837s 00:35:49.391 sys 0m7.340s 00:35:49.391 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:49.391 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:49.391 ************************************ 00:35:49.391 END TEST nvmf_bdevio 00:35:49.391 ************************************ 00:35:49.652 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:49.652 00:35:49.652 real 5m9.839s 00:35:49.652 user 10m19.294s 00:35:49.652 sys 2m11.988s 00:35:49.652 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:49.652 07:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:49.652 ************************************ 00:35:49.652 END TEST nvmf_target_core_interrupt_mode 00:35:49.652 ************************************ 00:35:49.652 07:36:24 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:49.652 07:36:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:49.652 07:36:24 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:49.652 07:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.652 ************************************ 00:35:49.652 START TEST nvmf_interrupt 00:35:49.652 ************************************ 00:35:49.652 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:49.652 * Looking for test storage... 
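
[Editor's annotation: not part of the captured log] The nvmf_interrupt suite opens by locating test storage (found just below) and probing the installed lcov: the lengthy cmp_versions trace that follows is only a field-by-field numeric compare used to pick legacy or modern coverage flags. A minimal reduction, assuming only the '<' comparison exercised here:

  lt() {                                     # "is version $1 older than $2?"
    local IFS=.- v
    local -a ver1=($1) ver2=($2)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                 # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15: use legacy --rc lcov_* option names"
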
00:35:49.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:49.652 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:49.652 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:49.652 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:49.913 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:49.913 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.913 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.913 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.914 --rc genhtml_branch_coverage=1 00:35:49.914 --rc genhtml_function_coverage=1 00:35:49.914 --rc genhtml_legend=1 00:35:49.914 --rc geninfo_all_blocks=1 00:35:49.914 --rc geninfo_unexecuted_blocks=1 00:35:49.914 00:35:49.914 ' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.914 --rc genhtml_branch_coverage=1 00:35:49.914 --rc genhtml_function_coverage=1 00:35:49.914 --rc genhtml_legend=1 00:35:49.914 --rc geninfo_all_blocks=1 00:35:49.914 --rc geninfo_unexecuted_blocks=1 00:35:49.914 00:35:49.914 ' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.914 --rc genhtml_branch_coverage=1 00:35:49.914 --rc genhtml_function_coverage=1 00:35:49.914 --rc genhtml_legend=1 00:35:49.914 --rc geninfo_all_blocks=1 00:35:49.914 --rc geninfo_unexecuted_blocks=1 00:35:49.914 00:35:49.914 ' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:49.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.914 --rc genhtml_branch_coverage=1 00:35:49.914 --rc genhtml_function_coverage=1 00:35:49.914 --rc genhtml_legend=1 00:35:49.914 --rc geninfo_all_blocks=1 00:35:49.914 --rc geninfo_unexecuted_blocks=1 00:35:49.914 00:35:49.914 ' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.914 07:36:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.915 07:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:58.062 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.062 07:36:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:58.062 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:58.062 Found net devices under 0000:31:00.0: cvl_0_0 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:58.062 Found net devices under 0000:31:00.1: cvl_0_1 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.062 07:36:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.062 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.063 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.063 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.063 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.063 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:35:58.325 00:35:58.325 --- 10.0.0.2 ping statistics --- 00:35:58.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.325 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:35:58.325 00:35:58.325 --- 10.0.0.1 ping statistics --- 00:35:58.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.325 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1590273 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1590273 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1590273 ']' 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:58.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:58.325 07:36:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:58.325 [2024-11-20 07:36:32.978555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:58.325 [2024-11-20 07:36:32.979713] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:35:58.325 [2024-11-20 07:36:32.979768] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.325 [2024-11-20 07:36:33.070897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:58.587 [2024-11-20 07:36:33.111623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
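
[Editor's annotation: not part of the captured log] The app_setup_trace hint block printed here (it continues just below) is the standard notice for a target started with -e 0xFFFF: every tracepoint group is armed, and the events can be pulled two ways, both quoted from the notices themselves:

  spdk_trace -s nvmf -i 0       # snapshot live events from shm instance 0 of app 'nvmf'
  cp /dev/shm/nvmf_trace.0 .    # or keep the ring buffer for offline analysis/debug
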
00:35:58.587 [2024-11-20 07:36:33.111659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.587 [2024-11-20 07:36:33.111667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.587 [2024-11-20 07:36:33.111674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.587 [2024-11-20 07:36:33.111680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.587 [2024-11-20 07:36:33.112986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.587 [2024-11-20 07:36:33.113148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.587 [2024-11-20 07:36:33.169829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:58.587 [2024-11-20 07:36:33.170569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:58.587 [2024-11-20 07:36:33.170839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:59.160 5000+0 records in 00:35:59.160 5000+0 records out 00:35:59.160 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0183003 s, 560 MB/s 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 AIO0 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 [2024-11-20 07:36:33.878010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.160 07:36:33 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.160 [2024-11-20 07:36:33.917908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1590273 0 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 0 idle 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.160 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:35:59.420 07:36:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590273 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.26 reactor_0' 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590273 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.26 reactor_0 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1590273 1 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 1 idle 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:35:59.421 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590277 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590277 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1590461 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1590273 0 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1590273 0 busy 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:59.682 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.683 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.683 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.683 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:35:59.683 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590273 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590273 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1590273 1 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1590273 1 busy 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590277 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.29 reactor_1' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590277 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.29 reactor_1 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.944 07:36:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1590461 00:36:10.132 Initializing NVMe Controllers 00:36:10.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.132 Controller IO queue size 256, less than required. 00:36:10.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:10.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:10.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:10.132 Initialization complete. Launching workers. 
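The latency table below comes from the spdk_nvme_perf invocation traced at target/interrupt.sh@31 above. Restated one flag per line for readability (a hedged re-rendering of the same command, not an extra test step; $SPDK_ROOT is a stand-in for the checkout path, and the flag readings follow spdk_nvme_perf's usage text):

    # Sketch: the perf workload that drives both reactors busy for 10 seconds.
    perf_args=(
        -q 256      # 256 outstanding I/Os per worker (queue depth)
        -o 4096     # 4 KiB I/O size
        -w randrw   # random mixed read/write workload
        -M 30       # 30% reads / 70% writes
        -t 10       # run time, in seconds
        -c 0xC      # core mask: lcores 2 and 3, matching the two workers above
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" "${perf_args[@]}"

Note the warning printed above the table: with -q 256 against a controller whose I/O queue is also 256 entries, requests can back up in the host-side driver, which is exactly what the log flags.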
00:36:10.132 ========================================================
00:36:10.132 Latency(us)
00:36:10.132 Device Information : IOPS MiB/s Average min max
00:36:10.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16821.30 65.71 15227.70 2889.39 18780.25
00:36:10.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18633.00 72.79 13741.13 7808.38 28474.75
00:36:10.132 ========================================================
00:36:10.132 Total : 35454.30 138.49 14446.43 2889.39 28474.75
00:36:10.132
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1590273 0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 0 idle
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590273 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0'
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590273 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1590273 1
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 1 idle
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273
00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.132 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590277 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590277 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.133 07:36:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:10.705 07:36:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:10.705 07:36:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:36:10.705 07:36:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:10.705 07:36:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:36:10.705 07:36:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1590273 0 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 0 idle 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:12.620 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590273 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.52 reactor_0' 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590273 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.52 reactor_0 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1590273 1 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1590273 1 idle 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1590273 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
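Every reactor_is_idle probe traced here boils down to the same snapshot-and-compare: one batch pass of top over the target's threads, grep for the reactor of interest, pull the %CPU column, truncate the decimal, and test it against the threshold. As a condensed, hypothetical standalone sketch (the real interrupt/common.sh also retries up to ten times and handles the busy case):

    # Sketch of the traced check; thresholds and parsing mirror the log above.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        # -b batch, -H per-thread, -n 1 single snapshot, -w 256 wide output;
        # field 9 of the matching thread line is %CPU.
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" |
            sed -e 's/^\s*//g' | awk '{print $9}'
    }

    reactor_is_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        local rate
        rate=$(reactor_cpu_rate "$pid" "$idx")
        rate=${rate%.*}                        # "93.3" -> 93, "0.0" -> 0
        (( ${rate:-0} <= idle_threshold ))     # exit status 0 means idle
    }

The busy check earlier in the run (BUSY_THRESHOLD=30) is the mirror image: the same rate has to land at or above the threshold, which it does at 99.9% and 93.3% for reactors 0 and 1.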
00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1590273 -w 256 00:36:12.881 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1590277 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1590277 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:13.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.143 rmmod nvme_tcp 00:36:13.143 rmmod nvme_fabrics 00:36:13.143 rmmod nvme_keyring 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1590273 ']'
00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1590273
00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1590273 ']'
00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1590273
00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname
00:36:13.143 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1590273
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1590273'
00:36:13.405 killing process with pid 1590273
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1590273
00:36:13.405 07:36:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1590273
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:13.405 07:36:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:15.951 07:36:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:15.951
00:36:15.951 real 0m25.949s
00:36:15.951 user 0m40.619s
00:36:15.951 sys 0m10.058s
00:36:15.951 07:36:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:15.951 07:36:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:36:15.951 ************************************
00:36:15.951 END TEST nvmf_interrupt
00:36:15.951 ************************************
00:36:15.951
00:36:15.951 real 31m11.982s
00:36:15.951 user 61m57.754s
00:36:15.951 sys 10m54.726s
00:36:15.951 07:36:50 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:15.951 07:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:15.951 ************************************
00:36:15.951 END TEST nvmf_tcp
00:36:15.951 ************************************
00:36:15.951 07:36:50 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]]
00:36:15.951 07:36:50 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:36:15.951 07:36:50 --
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:15.951 07:36:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:15.951 07:36:50 -- common/autotest_common.sh@10 -- # set +x 00:36:15.951 ************************************ 00:36:15.951 START TEST spdkcli_nvmf_tcp 00:36:15.951 ************************************ 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:15.951 * Looking for test storage... 00:36:15.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.951 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.952 --rc genhtml_branch_coverage=1 00:36:15.952 --rc genhtml_function_coverage=1 00:36:15.952 --rc genhtml_legend=1 00:36:15.952 --rc geninfo_all_blocks=1 00:36:15.952 --rc geninfo_unexecuted_blocks=1 00:36:15.952 00:36:15.952 ' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.952 --rc genhtml_branch_coverage=1 00:36:15.952 --rc genhtml_function_coverage=1 00:36:15.952 --rc genhtml_legend=1 00:36:15.952 --rc geninfo_all_blocks=1 00:36:15.952 --rc geninfo_unexecuted_blocks=1 00:36:15.952 00:36:15.952 ' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.952 --rc genhtml_branch_coverage=1 00:36:15.952 --rc genhtml_function_coverage=1 00:36:15.952 --rc genhtml_legend=1 00:36:15.952 --rc geninfo_all_blocks=1 00:36:15.952 --rc geninfo_unexecuted_blocks=1 00:36:15.952 00:36:15.952 ' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:15.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.952 --rc genhtml_branch_coverage=1 00:36:15.952 --rc genhtml_function_coverage=1 00:36:15.952 --rc genhtml_legend=1 00:36:15.952 --rc geninfo_all_blocks=1 00:36:15.952 --rc geninfo_unexecuted_blocks=1 00:36:15.952 00:36:15.952 ' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:15.952 
07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:15.952 07:36:50 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1593704 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1593704 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1593704 ']' 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:15.952 07:36:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.952 [2024-11-20 07:36:50.596068] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
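While the target prints its startup banner, the waitforlisten call above is polling for the moment the RPC socket at /var/tmp/spdk.sock starts answering. A minimal sketch of that start-and-poll pattern (assuming rpc.py's rpc_get_methods as the liveness probe; the real helper in autotest_common.sh adds timeout bookkeeping and richer error handling):

    # Sketch: launch nvmf_tgt on lcores 0-1, then wait for its RPC socket.
    "$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    nvmf_tgt_pid=$!

    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeding means the app is up and listening.
        if "$SPDK_ROOT/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmf_tgt_pid" || exit 1    # bail out if the target died early
        sleep 0.5
    done

The -m 0x3 mask is why the log reports two reactors, on cores 0 and 1; -p 0 pins the main (management) core to core 0.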
00:36:15.952 [2024-11-20 07:36:50.596130] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593704 ] 00:36:15.952 [2024-11-20 07:36:50.675986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:16.212 [2024-11-20 07:36:50.715429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.212 [2024-11-20 07:36:50.715431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.783 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:16.783 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:36:16.783 07:36:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:16.783 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.783 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.784 07:36:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:16.784 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:16.784 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:16.784 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:16.784 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:16.784 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:16.784 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:16.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:16.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:16.784 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:16.784 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:16.784 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:16.784 ' 00:36:19.332 [2024-11-20 07:36:53.865794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.718 [2024-11-20 07:36:55.073819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:22.635 [2024-11-20 07:36:57.292236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:24.557 [2024-11-20 07:36:59.218012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:25.947 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:25.947 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:25.947 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.947 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:25.947 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.947 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:25.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:25.947 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:26.209 07:37:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:26.470 07:37:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.732 
07:37:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.732 07:37:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:26.732 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:26.732 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:26.732 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:26.732 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:26.732 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:26.732 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:26.733 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:26.733 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:26.733 ' 00:36:32.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:32.020 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:32.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:32.021 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:32.021 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:32.021 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:32.021 
07:37:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1593704 ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1593704'
00:36:32.021 killing process with pid 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1593704 ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1593704
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1593704 ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1593704
00:36:32.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1593704) - No such process
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1593704 is not found'
00:36:32.021 Process with pid 1593704 is not found
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:36:32.021
00:36:32.021 real 0m16.284s
00:36:32.021 user 0m33.782s
00:36:32.021 sys 0m0.732s
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:32.021 07:37:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:32.021 ************************************
00:36:32.021 END TEST spdkcli_nvmf_tcp
00:36:32.021 ************************************
00:36:32.021 07:37:06 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:36:32.021 07:37:06 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:36:32.021 07:37:06 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:36:32.021 07:37:06 -- common/autotest_common.sh@10 -- # set +x
00:36:32.021 ************************************
00:36:32.021 START TEST nvmf_identify_passthru
00:36:32.021 ************************************
00:36:32.021 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:36:32.021 * Looking for test
storage... 00:36:32.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.021 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:32.021 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:36:32.021 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:32.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.283 --rc genhtml_branch_coverage=1 00:36:32.283 --rc genhtml_function_coverage=1 00:36:32.283 --rc genhtml_legend=1 00:36:32.283 --rc geninfo_all_blocks=1 00:36:32.283 --rc geninfo_unexecuted_blocks=1 00:36:32.283 00:36:32.283 ' 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:32.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.283 --rc genhtml_branch_coverage=1 00:36:32.283 --rc genhtml_function_coverage=1 00:36:32.283 --rc genhtml_legend=1 00:36:32.283 --rc geninfo_all_blocks=1 00:36:32.283 --rc geninfo_unexecuted_blocks=1 00:36:32.283 00:36:32.283 ' 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:32.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.283 --rc genhtml_branch_coverage=1 00:36:32.283 --rc genhtml_function_coverage=1 00:36:32.283 --rc genhtml_legend=1 00:36:32.283 --rc geninfo_all_blocks=1 00:36:32.283 --rc geninfo_unexecuted_blocks=1 00:36:32.283 00:36:32.283 ' 00:36:32.283 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:32.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.283 --rc genhtml_branch_coverage=1 00:36:32.283 --rc genhtml_function_coverage=1 00:36:32.283 --rc genhtml_legend=1 00:36:32.283 --rc geninfo_all_blocks=1 00:36:32.283 --rc geninfo_unexecuted_blocks=1 00:36:32.283 00:36:32.283 ' 00:36:32.283 07:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.283 07:37:06 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.283 07:37:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.283 07:37:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.283 07:37:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.283 07:37:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:32.283 07:37:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.283 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:32.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.284 07:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.284 07:37:06 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.284 07:37:06 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.284 07:37:06 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.284 07:37:06 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.284 07:37:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.284 07:37:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.284 07:37:06 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.284 07:37:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:32.284 07:37:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.284 07:37:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.284 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:32.284 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:32.284 07:37:06 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:32.284 07:37:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:40.436 07:37:14 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:40.436 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:40.437 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:40.437 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:40.437 Found net devices under 0000:31:00.0: cvl_0_0 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:40.437 Found net devices under 0000:31:00.1: cvl_0_1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:40.437 07:37:14 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:40.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:36:40.437 00:36:40.437 --- 10.0.0.2 ping statistics --- 00:36:40.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.437 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
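The nvmf_tcp_init phase above builds the two-port test topology: the target-side port (cvl_0_0) is pinned inside a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic has to cross the physical link rather than loopback. A minimal sketch of the equivalent manual setup, reusing the interface names and 10.0.0.0/24 addresses shown in this log:

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator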
00:36:40.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:36:40.437 00:36:40.437 --- 10.0.0.1 ping statistics --- 00:36:40.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.437 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:40.437 07:37:14 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:40.437 07:37:14 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.437 07:37:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:40.437 07:37:14 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:40.437 07:37:15 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:40.437 07:37:15 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:40.437 07:37:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:40.437 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:40.437 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:40.437 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:40.437 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:40.437 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:41.009 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:36:41.009 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:41.009 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:41.009 07:37:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1601263 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:41.581 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1601263 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1601263 ']' 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:41.581 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.581 [2024-11-20 07:37:16.167424] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:36:41.581 [2024-11-20 07:37:16.167482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.581 [2024-11-20 07:37:16.252972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:41.581 [2024-11-20 07:37:16.291271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.581 [2024-11-20 07:37:16.291306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
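The target below is started with --wait-for-rpc, which holds SPDK initialization until an RPC client has enabled the passthru-identify behavior and then explicitly releases the framework; the log drives this through the rpc_cmd wrapper, which forwards to scripts/rpc.py. A hedged sketch of the same bring-up issued directly with rpc.py (paths abbreviated; every method name and argument is taken from the rpc_cmd calls captured in this log):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # forward Identify to the backing device
  ./scripts/rpc.py framework_start_init                        # release the waiting target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420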
00:36:41.581 [2024-11-20 07:37:16.291314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.581 [2024-11-20 07:37:16.291321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.581 [2024-11-20 07:37:16.291327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:41.581 [2024-11-20 07:37:16.292851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.581 [2024-11-20 07:37:16.293002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.581 [2024-11-20 07:37:16.292877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:41.581 [2024-11-20 07:37:16.293130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:42.524 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.524 INFO: Log level set to 20 00:36:42.524 INFO: Requests: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "method": "nvmf_set_config", 00:36:42.524 "id": 1, 00:36:42.524 "params": { 00:36:42.524 "admin_cmd_passthru": { 00:36:42.524 "identify_ctrlr": true 00:36:42.524 } 00:36:42.524 } 00:36:42.524 } 00:36:42.524 00:36:42.524 INFO: response: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "id": 1, 00:36:42.524 "result": true 00:36:42.524 } 00:36:42.524 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.524 07:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.524 07:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.524 INFO: Setting log level to 20 00:36:42.524 INFO: Setting log level to 20 00:36:42.524 INFO: Log level set to 20 00:36:42.524 INFO: Log level set to 20 00:36:42.524 INFO: Requests: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "method": "framework_start_init", 00:36:42.524 "id": 1 00:36:42.524 } 00:36:42.524 00:36:42.524 INFO: Requests: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "method": "framework_start_init", 00:36:42.524 "id": 1 00:36:42.524 } 00:36:42.524 00:36:42.524 [2024-11-20 07:37:17.034231] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:42.524 INFO: response: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "id": 1, 00:36:42.524 "result": true 00:36:42.524 } 00:36:42.524 00:36:42.524 INFO: response: 00:36:42.524 { 00:36:42.524 "jsonrpc": "2.0", 00:36:42.524 "id": 1, 00:36:42.524 "result": true 00:36:42.524 } 00:36:42.524 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.524 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.524 07:37:17 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:42.524 INFO: Setting log level to 40 00:36:42.524 INFO: Setting log level to 40 00:36:42.524 INFO: Setting log level to 40 00:36:42.524 [2024-11-20 07:37:17.047560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.524 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.524 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.524 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.786 Nvme0n1 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.786 [2024-11-20 07:37:17.448160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.786 [ 00:36:42.786 { 00:36:42.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:42.786 "subtype": "Discovery", 00:36:42.786 "listen_addresses": [], 00:36:42.786 "allow_any_host": true, 00:36:42.786 "hosts": [] 00:36:42.786 }, 00:36:42.786 { 00:36:42.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:42.786 "subtype": "NVMe", 00:36:42.786 "listen_addresses": [ 00:36:42.786 { 00:36:42.786 "trtype": "TCP", 00:36:42.786 "adrfam": "IPv4", 00:36:42.786 "traddr": "10.0.0.2", 00:36:42.786 "trsvcid": "4420" 00:36:42.786 } 00:36:42.786 ], 00:36:42.786 "allow_any_host": true, 00:36:42.786 "hosts": [], 00:36:42.786 "serial_number": 
"SPDK00000000000001", 00:36:42.786 "model_number": "SPDK bdev Controller", 00:36:42.786 "max_namespaces": 1, 00:36:42.786 "min_cntlid": 1, 00:36:42.786 "max_cntlid": 65519, 00:36:42.786 "namespaces": [ 00:36:42.786 { 00:36:42.786 "nsid": 1, 00:36:42.786 "bdev_name": "Nvme0n1", 00:36:42.786 "name": "Nvme0n1", 00:36:42.786 "nguid": "3634473052605494002538450000002D", 00:36:42.786 "uuid": "36344730-5260-5494-0025-38450000002d" 00:36:42.786 } 00:36:42.786 ] 00:36:42.786 } 00:36:42.786 ] 00:36:42.786 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:42.786 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:43.048 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:36:43.048 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:43.048 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:43.048 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:43.309 07:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.309 rmmod nvme_tcp 00:36:43.309 rmmod nvme_fabrics 00:36:43.309 rmmod nvme_keyring 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1601263 ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1601263 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1601263 ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1601263 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:43.309 07:37:17 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1601263 00:36:43.309 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:43.309 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:43.309 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1601263' 00:36:43.309 killing process with pid 1601263 00:36:43.309 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1601263 00:36:43.309 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1601263 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:43.571 07:37:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.571 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:43.571 07:37:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.118 07:37:20 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:46.118 00:36:46.118 real 0m13.697s 00:36:46.118 user 0m10.456s 00:36:46.118 sys 0m7.025s 00:36:46.118 07:37:20 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:46.118 07:37:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 ************************************ 00:36:46.118 END TEST nvmf_identify_passthru 00:36:46.118 ************************************ 00:36:46.118 07:37:20 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:46.118 07:37:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:46.118 07:37:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:46.118 07:37:20 -- common/autotest_common.sh@10 -- # set +x 00:36:46.118 ************************************ 00:36:46.118 START TEST nvmf_dif 00:36:46.118 ************************************ 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:46.118 * Looking for test storage... 
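Each run_test invocation re-runs the coverage gate that follows: it pulls the installed lcov version and, via cmp_versions, keeps the lcov 1.x option spelling (--rc lcov_branch_coverage=1 ...) whenever that version is below 2. A compact sketch of an equivalent gate, assuming GNU sort's -V version ordering in place of the script's own cmp_versions parser:

  ver="$(lcov --version | awk '{print $NF}')"              # e.g. 1.15
  low="$(printf '%s\n%s\n' "$ver" 2 | sort -V | head -n1)"
  if [ "$low" = "$ver" ] && [ "$ver" != 2 ]; then          # true when ver < 2
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi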
00:36:46.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.118 --rc genhtml_branch_coverage=1 00:36:46.118 --rc genhtml_function_coverage=1 00:36:46.118 --rc genhtml_legend=1 00:36:46.118 --rc geninfo_all_blocks=1 00:36:46.118 --rc geninfo_unexecuted_blocks=1 00:36:46.118 00:36:46.118 ' 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.118 --rc genhtml_branch_coverage=1 00:36:46.118 --rc genhtml_function_coverage=1 00:36:46.118 --rc genhtml_legend=1 00:36:46.118 --rc geninfo_all_blocks=1 00:36:46.118 --rc geninfo_unexecuted_blocks=1 00:36:46.118 00:36:46.118 ' 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:36:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.118 --rc genhtml_branch_coverage=1 00:36:46.118 --rc genhtml_function_coverage=1 00:36:46.118 --rc genhtml_legend=1 00:36:46.118 --rc geninfo_all_blocks=1 00:36:46.118 --rc geninfo_unexecuted_blocks=1 00:36:46.118 00:36:46.118 ' 00:36:46.118 07:37:20 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.118 --rc genhtml_branch_coverage=1 00:36:46.118 --rc genhtml_function_coverage=1 00:36:46.118 --rc genhtml_legend=1 00:36:46.118 --rc geninfo_all_blocks=1 00:36:46.118 --rc geninfo_unexecuted_blocks=1 00:36:46.118 00:36:46.118 ' 00:36:46.118 07:37:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.118 07:37:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.118 07:37:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.118 07:37:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.118 07:37:20 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.119 07:37:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.119 07:37:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:46.119 07:37:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:46.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.119 07:37:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:46.119 07:37:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:46.119 07:37:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:46.119 07:37:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:46.119 07:37:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.119 07:37:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.119 07:37:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:46.119 07:37:20 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:46.119 07:37:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:54.264 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.264 
07:37:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:54.264 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:54.264 Found net devices under 0000:31:00.0: cvl_0_0 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:54.264 Found net devices under 0000:31:00.1: cvl_0_1 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@259 -- # 
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:54.264 07:37:28 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:54.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:54.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms
00:36:54.264
00:36:54.264 --- 10.0.0.2 ping statistics ---
00:36:54.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:54.264 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms
00:36:54.265 07:37:28 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:54.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
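
This second ping runs inside the namespace and checks the reverse path. Together the two pings verify the split topology the trace just built: target port cvl_0_0 moved into cvl_0_0_ns_spdk with 10.0.0.2, initiator port cvl_0_1 left in the root namespace with 10.0.0.1, and TCP port 4420 opened for NVMe/TCP. Rebuilt by hand (a sketch, interface names as discovered above), the wiring is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
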
00:36:54.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:36:54.265 00:36:54.265 --- 10.0.0.1 ping statistics --- 00:36:54.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.265 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:36:54.265 07:37:28 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:54.265 07:37:28 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:54.265 07:37:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:54.265 07:37:28 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:57.570 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:57.570 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:57.570 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:57.570 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:57.571 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:57.831 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:57.831 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.093 07:37:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:58.093 07:37:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1607928 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1607928 00:36:58.093 07:37:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1607928 ']' 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:58.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:58.093 07:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:58.093 [2024-11-20 07:37:32.829010] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:36:58.093 [2024-11-20 07:37:32.829056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.353 [2024-11-20 07:37:32.912390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.353 [2024-11-20 07:37:32.947032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.353 [2024-11-20 07:37:32.947061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:58.353 [2024-11-20 07:37:32.947068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.353 [2024-11-20 07:37:32.947075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.354 [2024-11-20 07:37:32.947081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:58.354 [2024-11-20 07:37:32.947646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:58.927 07:37:33 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 07:37:33 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.927 07:37:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:58.927 07:37:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 [2024-11-20 07:37:33.677227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.927 07:37:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:58.927 07:37:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:59.189 ************************************ 00:36:59.189 START TEST fio_dif_1_default 00:36:59.189 ************************************ 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.189 bdev_null0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.189 [2024-11-20 07:37:33.761582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.189 { 00:36:59.189 "params": { 00:36:59.189 "name": "Nvme$subsystem", 00:36:59.189 "trtype": "$TEST_TRANSPORT", 00:36:59.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.189 "adrfam": "ipv4", 00:36:59.189 
"trsvcid": "$NVMF_PORT", 00:36:59.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.189 "hdgst": ${hdgst:-false}, 00:36:59.189 "ddgst": ${ddgst:-false} 00:36:59.189 }, 00:36:59.189 "method": "bdev_nvme_attach_controller" 00:36:59.189 } 00:36:59.189 EOF 00:36:59.189 )") 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:59.189 "params": { 00:36:59.189 "name": "Nvme0", 00:36:59.189 "trtype": "tcp", 00:36:59.189 "traddr": "10.0.0.2", 00:36:59.189 "adrfam": "ipv4", 00:36:59.189 "trsvcid": "4420", 00:36:59.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.189 "hdgst": false, 00:36:59.189 "ddgst": false 00:36:59.189 }, 00:36:59.189 "method": "bdev_nvme_attach_controller" 00:36:59.189 }' 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:59.189 07:37:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.450 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:59.450 fio-3.35 00:36:59.450 Starting 1 thread 00:37:11.678 00:37:11.678 filename0: (groupid=0, jobs=1): err= 0: pid=1608461: Wed Nov 20 07:37:44 2024 00:37:11.678 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10014msec) 00:37:11.678 slat (nsec): min=5485, max=41794, avg=6637.65, stdev=1978.26 00:37:11.678 clat (usec): min=40803, max=43145, avg=41358.57, stdev=571.75 00:37:11.678 lat (usec): min=40811, max=43177, avg=41365.21, stdev=571.88 00:37:11.678 clat percentiles (usec): 00:37:11.678 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:11.678 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:11.678 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:11.678 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:37:11.678 | 99.99th=[43254] 00:37:11.678 bw ( KiB/s): min= 351, max= 416, per=99.57%, avg=385.55, stdev=12.75, samples=20 00:37:11.678 iops : min= 87, max= 104, avg=96.35, stdev= 3.30, samples=20 00:37:11.679 lat (msec) : 50=100.00% 00:37:11.679 cpu : usr=93.38%, sys=6.38%, ctx=12, majf=0, minf=245 00:37:11.679 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.679 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.679 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:11.679 00:37:11.679 Run status group 0 (all jobs): 
00:37:11.679 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10014-10014msec 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 00:37:11.679 real 0m11.289s 00:37:11.679 user 0m24.374s 00:37:11.679 sys 0m0.975s 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 ************************************ 00:37:11.679 END TEST fio_dif_1_default 00:37:11.679 ************************************ 00:37:11.679 07:37:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:11.679 07:37:45 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:11.679 07:37:45 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 ************************************ 00:37:11.679 START TEST fio_dif_1_multi_subsystems 00:37:11.679 ************************************ 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 bdev_null0 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 [2024-11-20 07:37:45.128418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 bdev_null1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:11.679 { 00:37:11.679 "params": { 00:37:11.679 "name": "Nvme$subsystem", 00:37:11.679 "trtype": "$TEST_TRANSPORT", 00:37:11.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.679 "adrfam": "ipv4", 00:37:11.679 "trsvcid": "$NVMF_PORT", 00:37:11.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.679 "hdgst": ${hdgst:-false}, 00:37:11.679 "ddgst": ${ddgst:-false} 00:37:11.679 }, 00:37:11.679 "method": "bdev_nvme_attach_controller" 00:37:11.679 } 00:37:11.679 EOF 00:37:11.679 )") 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.679 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:11.680 { 00:37:11.680 "params": { 00:37:11.680 "name": "Nvme$subsystem", 00:37:11.680 "trtype": "$TEST_TRANSPORT", 00:37:11.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.680 "adrfam": "ipv4", 00:37:11.680 "trsvcid": "$NVMF_PORT", 00:37:11.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.680 "hdgst": ${hdgst:-false}, 00:37:11.680 "ddgst": ${ddgst:-false} 00:37:11.680 }, 00:37:11.680 "method": "bdev_nvme_attach_controller" 00:37:11.680 } 00:37:11.680 EOF 00:37:11.680 )") 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:11.680 "params": { 00:37:11.680 "name": "Nvme0", 00:37:11.680 "trtype": "tcp", 00:37:11.680 "traddr": "10.0.0.2", 00:37:11.680 "adrfam": "ipv4", 00:37:11.680 "trsvcid": "4420", 00:37:11.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.680 "hdgst": false, 00:37:11.680 "ddgst": false 00:37:11.680 }, 00:37:11.680 "method": "bdev_nvme_attach_controller" 00:37:11.680 },{ 00:37:11.680 "params": { 00:37:11.680 "name": "Nvme1", 00:37:11.680 "trtype": "tcp", 00:37:11.680 "traddr": "10.0.0.2", 00:37:11.680 "adrfam": "ipv4", 00:37:11.680 "trsvcid": "4420", 00:37:11.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:11.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:11.680 "hdgst": false, 00:37:11.680 "ddgst": false 00:37:11.680 }, 00:37:11.680 "method": "bdev_nvme_attach_controller" 00:37:11.680 }' 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:11.680 07:37:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.680 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:11.680 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:11.680 fio-3.35 00:37:11.680 Starting 2 threads 00:37:21.815 00:37:21.815 filename0: (groupid=0, jobs=1): err= 0: pid=1610936: Wed Nov 20 07:37:56 2024 00:37:21.815 read: IOPS=187, BW=748KiB/s (766kB/s)(7488KiB/10006msec) 00:37:21.815 slat (nsec): min=5488, max=29235, avg=6384.94, stdev=1508.25 00:37:21.815 clat (usec): min=667, max=43036, avg=21362.08, stdev=20296.73 00:37:21.815 lat (usec): min=673, max=43042, avg=21368.47, stdev=20296.72 00:37:21.815 clat percentiles (usec): 00:37:21.815 | 1.00th=[ 848], 5.00th=[ 873], 10.00th=[ 898], 20.00th=[ 930], 00:37:21.815 | 30.00th=[ 963], 40.00th=[ 1020], 50.00th=[41157], 60.00th=[41157], 00:37:21.815 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:21.815 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:37:21.815 | 99.99th=[43254] 00:37:21.815 bw ( KiB/s): min= 672, max= 768, per=49.92%, avg=747.20, stdev=31.62, samples=20 00:37:21.815 iops : min= 168, max= 192, avg=186.80, stdev= 7.90, samples=20 00:37:21.815 lat (usec) : 750=0.21%, 1000=38.19% 00:37:21.815 lat (msec) : 2=11.38%, 50=50.21% 00:37:21.815 cpu : usr=95.32%, sys=4.48%, ctx=14, majf=0, minf=113 00:37:21.815 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.815 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.815 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:21.815 filename1: (groupid=0, jobs=1): err= 0: pid=1610937: Wed Nov 20 07:37:56 2024 00:37:21.815 read: IOPS=187, BW=748KiB/s (766kB/s)(7488KiB/10008msec) 00:37:21.815 slat (nsec): min=5493, max=45479, avg=6488.98, stdev=1821.79 00:37:21.815 clat (usec): min=775, max=43303, avg=21365.94, stdev=20266.67 00:37:21.815 lat (usec): min=783, max=43337, avg=21372.42, stdev=20266.65 00:37:21.815 clat percentiles (usec): 00:37:21.815 | 1.00th=[ 865], 5.00th=[ 922], 10.00th=[ 938], 20.00th=[ 955], 00:37:21.815 | 30.00th=[ 996], 40.00th=[ 1045], 50.00th=[41157], 60.00th=[41157], 00:37:21.815 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:21.815 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:37:21.815 | 99.99th=[43254] 00:37:21.815 bw ( KiB/s): min= 672, max= 768, per=49.92%, avg=747.20, stdev=33.28, samples=20 00:37:21.815 iops : min= 168, max= 192, avg=186.80, stdev= 8.32, samples=20 00:37:21.815 lat (usec) : 1000=31.62% 00:37:21.815 lat (msec) : 2=18.16%, 50=50.21% 00:37:21.815 cpu : usr=95.35%, sys=4.45%, ctx=13, majf=0, minf=173 00:37:21.815 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:21.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.815 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.815 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.815 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:21.815 00:37:21.815 Run status group 0 (all jobs): 00:37:21.815 READ: bw=1496KiB/s (1532kB/s), 748KiB/s-748KiB/s (766kB/s-766kB/s), io=14.6MiB (15.3MB), run=10006-10008msec 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.815 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 00:37:22.077 real 0m11.514s 00:37:22.077 user 0m37.443s 00:37:22.077 sys 0m1.281s 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 ************************************ 00:37:22.077 END TEST fio_dif_1_multi_subsystems 00:37:22.077 ************************************ 00:37:22.077 07:37:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
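
The two-subsystem case torn down above maps one-to-one onto scripts/rpc.py calls: each bdev_null with 16-byte metadata and DIF type 1 is exported through its own NQN on the same 10.0.0.2:4420 listener, and fio drives one job per controller. As a sketch (the harness issues the same RPCs through rpc_cmd against the target running in the namespace):

    rpc=scripts/rpc.py   # relative to the spdk checkout; run via ip netns exec cvl_0_0_ns_spdk
    for i in 0 1; do
        $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
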
00:37:22.077 07:37:56 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:22.077 07:37:56 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 ************************************ 00:37:22.077 START TEST fio_dif_rand_params 00:37:22.077 ************************************ 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 bdev_null0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.077 [2024-11-20 07:37:56.709383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.077 { 00:37:22.077 "params": { 00:37:22.077 "name": "Nvme$subsystem", 00:37:22.077 "trtype": "$TEST_TRANSPORT", 00:37:22.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.077 "adrfam": "ipv4", 00:37:22.077 "trsvcid": "$NVMF_PORT", 00:37:22.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.077 "hdgst": ${hdgst:-false}, 00:37:22.077 "ddgst": ${ddgst:-false} 00:37:22.077 }, 00:37:22.077 "method": "bdev_nvme_attach_controller" 00:37:22.077 } 00:37:22.077 EOF 00:37:22.077 )") 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
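
For this pass the null bdev was created with --dif-type 3 and the randomized knobs landed on bs=128k, numjobs=3, iodepth=3 with a 5-second runtime, so the target's --dif-insert-or-strip path is exercised with larger, concurrent I/O. Spelled out as a plain fio command line it would look roughly like this (a sketch; --time_based is inferred from the fixed runtime, and bdev.json plus the Nvme0n1 bdev name are assumed as in the single-file case):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 --rw=randread \
        --bs=128k --numjobs=3 --iodepth=3 --time_based=1 --runtime=5
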
00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:22.077 "params": { 00:37:22.077 "name": "Nvme0", 00:37:22.077 "trtype": "tcp", 00:37:22.077 "traddr": "10.0.0.2", 00:37:22.077 "adrfam": "ipv4", 00:37:22.077 "trsvcid": "4420", 00:37:22.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.077 "hdgst": false, 00:37:22.077 "ddgst": false 00:37:22.077 }, 00:37:22.077 "method": "bdev_nvme_attach_controller" 00:37:22.077 }' 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.077 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.078 07:37:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.654 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:22.654 ... 
00:37:22.654 fio-3.35 00:37:22.654 Starting 3 threads 00:37:27.936 00:37:27.936 filename0: (groupid=0, jobs=1): err= 0: pid=1613182: Wed Nov 20 07:38:02 2024 00:37:27.936 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(139MiB/5048msec) 00:37:27.936 slat (nsec): min=8104, max=49414, avg=9654.96, stdev=3152.31 00:37:27.936 clat (usec): min=5704, max=91154, avg=13527.17, stdev=6967.50 00:37:27.936 lat (usec): min=5713, max=91163, avg=13536.82, stdev=6967.75 00:37:27.936 clat percentiles (usec): 00:37:27.936 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10028], 00:37:27.936 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13042], 60.00th=[13566], 00:37:27.936 | 70.00th=[14222], 80.00th=[14746], 90.00th=[15401], 95.00th=[16450], 00:37:27.936 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53740], 99.95th=[90702], 00:37:27.936 | 99.99th=[90702] 00:37:27.936 bw ( KiB/s): min=22272, max=33280, per=30.61%, avg=28492.80, stdev=3472.66, samples=10 00:37:27.936 iops : min= 174, max= 260, avg=222.60, stdev=27.13, samples=10 00:37:27.936 lat (msec) : 10=19.91%, 20=77.31%, 50=1.08%, 100=1.70% 00:37:27.936 cpu : usr=90.77%, sys=6.28%, ctx=353, majf=0, minf=112 00:37:27.936 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 issued rwts: total=1115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.936 filename0: (groupid=0, jobs=1): err= 0: pid=1613183: Wed Nov 20 07:38:02 2024 00:37:27.936 read: IOPS=266, BW=33.4MiB/s (35.0MB/s)(168MiB/5045msec) 00:37:27.936 slat (nsec): min=5532, max=31808, avg=6723.07, stdev=1139.00 00:37:27.936 clat (usec): min=4765, max=50245, avg=11194.04, stdev=4823.87 00:37:27.936 lat (usec): min=4772, max=50277, avg=11200.77, stdev=4824.10 00:37:27.936 clat percentiles (usec): 00:37:27.936 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 8160], 00:37:27.936 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11207], 60.00th=[11731], 00:37:27.936 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13698], 95.00th=[14484], 00:37:27.936 | 99.00th=[46924], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:37:27.936 | 99.99th=[50070] 00:37:27.936 bw ( KiB/s): min=29952, max=37376, per=36.99%, avg=34432.00, stdev=2169.72, samples=10 00:37:27.936 iops : min= 234, max= 292, avg=269.00, stdev=16.95, samples=10 00:37:27.936 lat (msec) : 10=35.63%, 20=63.10%, 50=1.11%, 100=0.15% 00:37:27.936 cpu : usr=93.89%, sys=5.85%, ctx=10, majf=0, minf=104 00:37:27.936 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.936 filename0: (groupid=0, jobs=1): err= 0: pid=1613184: Wed Nov 20 07:38:02 2024 00:37:27.936 read: IOPS=239, BW=30.0MiB/s (31.4MB/s)(151MiB/5045msec) 00:37:27.936 slat (nsec): min=5524, max=32745, avg=7993.39, stdev=1664.86 00:37:27.936 clat (usec): min=5229, max=92575, avg=12472.83, stdev=10915.07 00:37:27.936 lat (usec): min=5237, max=92583, avg=12480.83, stdev=10915.24 00:37:27.936 clat percentiles (usec): 00:37:27.936 | 1.00th=[ 6128], 5.00th=[ 7898], 10.00th=[ 8356], 
20.00th=[ 8979], 00:37:27.936 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:37:27.936 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11600], 95.00th=[50070], 00:37:27.936 | 99.00th=[51643], 99.50th=[52167], 99.90th=[92799], 99.95th=[92799], 00:37:27.936 | 99.99th=[92799] 00:37:27.936 bw ( KiB/s): min=26112, max=39424, per=33.19%, avg=30899.20, stdev=4356.26, samples=10 00:37:27.936 iops : min= 204, max= 308, avg=241.40, stdev=34.03, samples=10 00:37:27.936 lat (msec) : 10=57.98%, 20=35.48%, 50=1.65%, 100=4.88% 00:37:27.936 cpu : usr=96.45%, sys=3.29%, ctx=6, majf=0, minf=43 00:37:27.936 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.936 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:27.936 00:37:27.936 Run status group 0 (all jobs): 00:37:27.936 READ: bw=90.9MiB/s (95.3MB/s), 27.6MiB/s-33.4MiB/s (29.0MB/s-35.0MB/s), io=459MiB (481MB), run=5045-5048msec 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:28.197 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 bdev_null0 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 [2024-11-20 07:38:02.856576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 bdev_null1 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 bdev_null2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.198 { 00:37:28.198 "params": { 00:37:28.198 "name": "Nvme$subsystem", 00:37:28.198 "trtype": "$TEST_TRANSPORT", 00:37:28.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.198 "adrfam": "ipv4", 00:37:28.198 "trsvcid": "$NVMF_PORT", 00:37:28.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.198 "hdgst": ${hdgst:-false}, 00:37:28.198 "ddgst": ${ddgst:-false} 00:37:28.198 }, 00:37:28.198 "method": "bdev_nvme_attach_controller" 00:37:28.198 } 00:37:28.198 EOF 00:37:28.198 )") 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.198 { 00:37:28.198 "params": { 00:37:28.198 "name": "Nvme$subsystem", 00:37:28.198 "trtype": "$TEST_TRANSPORT", 00:37:28.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.198 "adrfam": "ipv4", 00:37:28.198 "trsvcid": "$NVMF_PORT", 00:37:28.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.198 "hdgst": ${hdgst:-false}, 00:37:28.198 "ddgst": ${ddgst:-false} 00:37:28.198 }, 00:37:28.198 "method": "bdev_nvme_attach_controller" 00:37:28.198 } 00:37:28.198 EOF 00:37:28.198 )") 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.198 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.198 { 00:37:28.198 "params": { 00:37:28.198 "name": "Nvme$subsystem", 00:37:28.198 "trtype": "$TEST_TRANSPORT", 00:37:28.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.198 "adrfam": "ipv4", 00:37:28.198 "trsvcid": "$NVMF_PORT", 00:37:28.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.199 "hdgst": ${hdgst:-false}, 00:37:28.199 "ddgst": ${ddgst:-false} 00:37:28.199 }, 00:37:28.199 "method": "bdev_nvme_attach_controller" 00:37:28.199 } 00:37:28.199 EOF 00:37:28.199 )") 00:37:28.199 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:28.199 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:28.199 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:28.199 07:38:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.199 "params": { 00:37:28.199 "name": "Nvme0", 00:37:28.199 "trtype": "tcp", 00:37:28.199 "traddr": "10.0.0.2", 00:37:28.199 "adrfam": "ipv4", 00:37:28.199 "trsvcid": "4420", 00:37:28.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.199 "hdgst": false, 00:37:28.199 "ddgst": false 00:37:28.199 }, 00:37:28.199 "method": "bdev_nvme_attach_controller" 00:37:28.199 },{ 00:37:28.199 "params": { 00:37:28.199 "name": "Nvme1", 00:37:28.199 "trtype": "tcp", 00:37:28.199 "traddr": "10.0.0.2", 00:37:28.199 "adrfam": "ipv4", 00:37:28.199 "trsvcid": "4420", 00:37:28.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.199 "hdgst": false, 00:37:28.199 "ddgst": false 00:37:28.199 }, 00:37:28.199 "method": "bdev_nvme_attach_controller" 00:37:28.199 },{ 00:37:28.199 "params": { 00:37:28.199 "name": "Nvme2", 00:37:28.199 "trtype": "tcp", 00:37:28.199 "traddr": "10.0.0.2", 00:37:28.199 "adrfam": "ipv4", 00:37:28.199 "trsvcid": "4420", 00:37:28.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:28.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:28.199 "hdgst": false, 00:37:28.199 "ddgst": false 00:37:28.199 }, 00:37:28.199 "method": "bdev_nvme_attach_controller" 00:37:28.199 }' 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:28.481 07:38:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:28.481 07:38:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:37:28.481 07:38:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:28.482 07:38:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:28.482 07:38:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.751 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:28.751 ... 00:37:28.751 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:28.751 ... 00:37:28.751 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:28.751 ... 00:37:28.751 fio-3.35 00:37:28.751 Starting 24 threads 00:37:40.979 00:37:40.979 filename0: (groupid=0, jobs=1): err= 0: pid=1614585: Wed Nov 20 07:38:14 2024 00:37:40.979 read: IOPS=526, BW=2106KiB/s (2157kB/s)(20.6MiB/10028msec) 00:37:40.979 slat (nsec): min=5668, max=53965, avg=9126.09, stdev=4300.57 00:37:40.979 clat (usec): min=1469, max=36153, avg=30311.68, stdev=6674.92 00:37:40.979 lat (usec): min=1482, max=36160, avg=30320.81, stdev=6674.24 00:37:40.979 clat percentiles (usec): 00:37:40.979 | 1.00th=[ 1713], 5.00th=[18220], 10.00th=[21365], 20.00th=[31327], 00:37:40.979 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:40.979 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:37:40.979 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:37:40.979 | 99.99th=[35914] 00:37:40.980 bw ( KiB/s): min= 1920, max= 3456, per=4.49%, avg=2105.60, stdev=333.46, samples=20 00:37:40.980 iops : min= 480, max= 864, avg=526.40, stdev=83.37, samples=20 00:37:40.980 lat (msec) : 2=2.31%, 4=0.72%, 10=0.04%, 20=3.90%, 50=93.03% 00:37:40.980 cpu : usr=98.97%, sys=0.76%, ctx=38, majf=0, minf=57 00:37:40.980 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614586: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=490, BW=1960KiB/s (2008kB/s)(19.2MiB/10018msec) 00:37:40.980 slat (nsec): min=5664, max=62265, avg=13928.50, stdev=10318.66 00:37:40.980 clat (usec): min=12078, max=54580, avg=32541.70, stdev=5584.66 00:37:40.980 lat (usec): min=12088, max=54595, avg=32555.63, stdev=5586.67 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[19006], 5.00th=[22414], 10.00th=[24773], 20.00th=[30802], 00:37:40.980 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.980 | 70.00th=[33162], 80.00th=[33817], 90.00th=[38536], 95.00th=[43779], 00:37:40.980 | 99.00th=[49021], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:37:40.980 | 99.99th=[54789] 00:37:40.980 bw ( KiB/s): min= 1664, max= 2160, per=4.18%, avg=1957.60, stdev=151.99, samples=20 00:37:40.980 iops : min= 416, max= 540, avg=489.40, stdev=38.00, samples=20 00:37:40.980 lat (msec) : 20=1.96%, 50=97.56%, 100=0.49% 00:37:40.980 cpu : usr=98.94%, 
sys=0.78%, ctx=21, majf=0, minf=30 00:37:40.980 IO depths : 1=3.5%, 2=7.4%, 4=17.7%, 8=62.2%, 16=9.2%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614587: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10006msec) 00:37:40.980 slat (nsec): min=5524, max=54380, avg=10967.07, stdev=7591.64 00:37:40.980 clat (usec): min=9370, max=50072, avg=33151.78, stdev=2807.09 00:37:40.980 lat (usec): min=9376, max=50085, avg=33162.75, stdev=2807.30 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[24249], 5.00th=[29754], 10.00th=[32375], 20.00th=[32637], 00:37:40.980 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:37:40.980 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:37:40.980 | 99.00th=[43254], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:37:40.980 | 99.99th=[50070] 00:37:40.980 bw ( KiB/s): min= 1808, max= 2000, per=4.10%, avg=1920.00, stdev=50.88, samples=19 00:37:40.980 iops : min= 452, max= 500, avg=480.00, stdev=12.72, samples=19 00:37:40.980 lat (msec) : 10=0.12%, 20=0.44%, 50=99.40%, 100=0.04% 00:37:40.980 cpu : usr=98.87%, sys=0.86%, ctx=14, majf=0, minf=43 00:37:40.980 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=80.6%, 16=18.0%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=89.5%, 8=10.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614588: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10007msec) 00:37:40.980 slat (nsec): min=5654, max=82547, avg=24389.66, stdev=13944.06 00:37:40.980 clat (usec): min=23418, max=40747, avg=33004.61, stdev=1080.21 00:37:40.980 lat (usec): min=23444, max=40763, avg=33029.00, stdev=1080.48 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:37:40.980 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:40.980 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.980 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:37:40.980 | 99.99th=[40633] 00:37:40.980 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1919.79, stdev=60.35, samples=19 00:37:40.980 iops : min= 448, max= 512, avg=479.95, stdev=15.09, samples=19 00:37:40.980 lat (msec) : 50=100.00% 00:37:40.980 cpu : usr=98.43%, sys=1.05%, ctx=193, majf=0, minf=32 00:37:40.980 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614589: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=479, BW=1919KiB/s 
(1965kB/s)(18.8MiB/10003msec) 00:37:40.980 slat (nsec): min=4651, max=64526, avg=18041.45, stdev=10614.00 00:37:40.980 clat (usec): min=23308, max=63481, avg=33188.39, stdev=1666.06 00:37:40.980 lat (usec): min=23345, max=63496, avg=33206.43, stdev=1666.41 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:37:40.980 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.980 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.980 | 99.00th=[35914], 99.50th=[36439], 99.90th=[54789], 99.95th=[54789], 00:37:40.980 | 99.99th=[63701] 00:37:40.980 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.05, stdev=67.09, samples=19 00:37:40.980 iops : min= 448, max= 512, avg=478.26, stdev=16.77, samples=19 00:37:40.980 lat (msec) : 50=99.67%, 100=0.33% 00:37:40.980 cpu : usr=98.74%, sys=0.93%, ctx=58, majf=0, minf=34 00:37:40.980 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614590: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10015msec) 00:37:40.980 slat (nsec): min=5747, max=62204, avg=12159.69, stdev=7960.96 00:37:40.980 clat (usec): min=20350, max=37148, avg=33059.09, stdev=1434.15 00:37:40.980 lat (usec): min=20356, max=37164, avg=33071.25, stdev=1434.29 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32637], 20.00th=[32637], 00:37:40.980 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:37:40.980 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.980 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:40.980 | 99.99th=[36963] 00:37:40.980 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1926.55, stdev=65.32, samples=20 00:37:40.980 iops : min= 448, max= 512, avg=481.60, stdev=16.33, samples=20 00:37:40.980 lat (msec) : 50=100.00% 00:37:40.980 cpu : usr=99.10%, sys=0.63%, ctx=14, majf=0, minf=49 00:37:40.980 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614591: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=488, BW=1952KiB/s (1999kB/s)(19.1MiB/10010msec) 00:37:40.980 slat (nsec): min=5671, max=62021, avg=13947.48, stdev=9867.83 00:37:40.980 clat (usec): min=10256, max=47704, avg=32673.84, stdev=3507.60 00:37:40.980 lat (usec): min=10268, max=47710, avg=32687.79, stdev=3507.52 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[15664], 5.00th=[29754], 10.00th=[32113], 20.00th=[32637], 00:37:40.980 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.980 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.980 | 99.00th=[44827], 99.50th=[46400], 99.90th=[46924], 
99.95th=[46924], 00:37:40.980 | 99.99th=[47449] 00:37:40.980 bw ( KiB/s): min= 1792, max= 2304, per=4.17%, avg=1956.21, stdev=117.44, samples=19 00:37:40.980 iops : min= 448, max= 576, avg=489.05, stdev=29.36, samples=19 00:37:40.980 lat (msec) : 20=1.84%, 50=98.16% 00:37:40.980 cpu : usr=98.88%, sys=0.82%, ctx=51, majf=0, minf=48 00:37:40.980 IO depths : 1=1.7%, 2=7.8%, 4=24.4%, 8=55.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.980 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.980 filename0: (groupid=0, jobs=1): err= 0: pid=1614593: Wed Nov 20 07:38:14 2024 00:37:40.980 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:37:40.980 slat (nsec): min=5431, max=83332, avg=24151.47, stdev=13535.09 00:37:40.980 clat (usec): min=9353, max=46639, avg=33004.71, stdev=1855.66 00:37:40.980 lat (usec): min=9360, max=46656, avg=33028.86, stdev=1855.55 00:37:40.980 clat percentiles (usec): 00:37:40.980 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:37:40.980 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:37:40.980 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.980 | 99.00th=[35914], 99.50th=[36439], 99.90th=[46400], 99.95th=[46400], 00:37:40.980 | 99.99th=[46400] 00:37:40.980 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1920.16, stdev=59.99, samples=19 00:37:40.980 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:37:40.980 lat (msec) : 10=0.29%, 20=0.04%, 50=99.67% 00:37:40.980 cpu : usr=99.01%, sys=0.73%, ctx=14, majf=0, minf=32 00:37:40.980 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:40.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614594: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec) 00:37:40.981 slat (nsec): min=5685, max=64081, avg=14783.76, stdev=10436.77 00:37:40.981 clat (usec): min=8654, max=36229, avg=32693.85, stdev=2933.01 00:37:40.981 lat (usec): min=8663, max=36235, avg=32708.64, stdev=2932.32 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[15795], 5.00th=[31589], 10.00th=[32375], 20.00th=[32637], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:40.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:40.981 | 99.99th=[36439] 00:37:40.981 bw ( KiB/s): min= 1792, max= 2304, per=4.17%, avg=1953.68, stdev=103.13, samples=19 00:37:40.981 iops : min= 448, max= 576, avg=488.42, stdev=25.78, samples=19 00:37:40.981 lat (msec) : 10=0.04%, 20=1.93%, 50=98.03% 00:37:40.981 cpu : usr=98.98%, sys=0.75%, ctx=12, majf=0, minf=49 00:37:40.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:40.981 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614595: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10006msec) 00:37:40.981 slat (nsec): min=4909, max=69379, avg=13981.81, stdev=10954.97 00:37:40.981 clat (usec): min=9169, max=53091, avg=33121.38, stdev=3164.98 00:37:40.981 lat (usec): min=9175, max=53100, avg=33135.36, stdev=3164.28 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[22152], 5.00th=[31327], 10.00th=[32375], 20.00th=[32637], 00:37:40.981 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:37:40.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:37:40.981 | 99.00th=[45876], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216], 00:37:40.981 | 99.99th=[53216] 00:37:40.981 bw ( KiB/s): min= 1840, max= 2000, per=4.10%, avg=1922.53, stdev=39.29, samples=19 00:37:40.981 iops : min= 460, max= 500, avg=480.63, stdev= 9.82, samples=19 00:37:40.981 lat (msec) : 10=0.21%, 20=0.54%, 50=99.13%, 100=0.12% 00:37:40.981 cpu : usr=98.94%, sys=0.79%, ctx=14, majf=0, minf=32 00:37:40.981 IO depths : 1=0.5%, 2=2.2%, 4=7.6%, 8=73.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=90.7%, 8=7.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614596: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10006msec) 00:37:40.981 slat (nsec): min=5785, max=69983, avg=20052.96, stdev=11634.28 00:37:40.981 clat (usec): min=20921, max=53636, avg=33033.51, stdev=1694.24 00:37:40.981 lat (usec): min=20929, max=53655, avg=33053.56, stdev=1694.01 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[35914], 99.50th=[36439], 99.90th=[51643], 99.95th=[51643], 00:37:40.981 | 99.99th=[53740] 00:37:40.981 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1922.32, stdev=61.35, samples=19 00:37:40.981 iops : min= 448, max= 512, avg=480.58, stdev=15.34, samples=19 00:37:40.981 lat (msec) : 50=99.79%, 100=0.21% 00:37:40.981 cpu : usr=98.94%, sys=0.80%, ctx=18, majf=0, minf=34 00:37:40.981 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614597: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10005msec) 00:37:40.981 slat (nsec): min=5525, max=58214, avg=13441.67, stdev=7743.27 00:37:40.981 clat (usec): min=8954, max=68307, avg=33137.72, stdev=3055.95 00:37:40.981 lat (usec): min=8959, max=68325, avg=33151.16, stdev=3056.10 00:37:40.981 clat percentiles (usec): 
00:37:40.981 | 1.00th=[21627], 5.00th=[31851], 10.00th=[32637], 20.00th=[32637], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:40.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[36439], 99.50th=[47449], 99.90th=[68682], 99.95th=[68682], 00:37:40.981 | 99.99th=[68682] 00:37:40.981 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1911.74, stdev=66.82, samples=19 00:37:40.981 iops : min= 448, max= 512, avg=477.89, stdev=16.78, samples=19 00:37:40.981 lat (msec) : 10=0.33%, 20=0.19%, 50=98.98%, 100=0.50% 00:37:40.981 cpu : usr=98.72%, sys=0.87%, ctx=40, majf=0, minf=33 00:37:40.981 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614598: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:37:40.981 slat (nsec): min=5720, max=66965, avg=22426.00, stdev=10819.76 00:37:40.981 clat (usec): min=9295, max=46786, avg=33027.12, stdev=1875.31 00:37:40.981 lat (usec): min=9301, max=46802, avg=33049.55, stdev=1875.58 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[31065], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.981 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[35914], 99.50th=[36439], 99.90th=[46924], 99.95th=[46924], 00:37:40.981 | 99.99th=[46924] 00:37:40.981 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1920.00, stdev=60.34, samples=19 00:37:40.981 iops : min= 448, max= 512, avg=480.00, stdev=15.08, samples=19 00:37:40.981 lat (msec) : 10=0.33%, 50=99.67% 00:37:40.981 cpu : usr=98.01%, sys=1.26%, ctx=263, majf=0, minf=29 00:37:40.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614599: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:37:40.981 slat (nsec): min=4616, max=82526, avg=22985.13, stdev=14438.27 00:37:40.981 clat (usec): min=23362, max=54924, avg=33142.35, stdev=1610.90 00:37:40.981 lat (usec): min=23389, max=54937, avg=33165.33, stdev=1609.27 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.981 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[35914], 99.50th=[36439], 99.90th=[54789], 99.95th=[54789], 00:37:40.981 | 99.99th=[54789] 00:37:40.981 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.05, stdev=67.09, samples=19 00:37:40.981 iops : min= 448, max= 512, avg=478.26, stdev=16.77, samples=19 00:37:40.981 lat (msec) : 50=99.67%, 100=0.33% 
00:37:40.981 cpu : usr=98.28%, sys=1.15%, ctx=170, majf=0, minf=33 00:37:40.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614600: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10010msec) 00:37:40.981 slat (nsec): min=5679, max=62271, avg=10608.54, stdev=7960.82 00:37:40.981 clat (usec): min=10143, max=45440, avg=31889.89, stdev=3940.06 00:37:40.981 lat (usec): min=10154, max=45457, avg=31900.50, stdev=3939.88 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[15795], 5.00th=[22414], 10.00th=[27395], 20.00th=[32375], 00:37:40.981 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.981 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.981 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:40.981 | 99.99th=[45351] 00:37:40.981 bw ( KiB/s): min= 1920, max= 2304, per=4.27%, avg=2000.84, stdev=114.57, samples=19 00:37:40.981 iops : min= 480, max= 576, avg=500.21, stdev=28.64, samples=19 00:37:40.981 lat (msec) : 20=3.83%, 50=96.17% 00:37:40.981 cpu : usr=98.89%, sys=0.83%, ctx=15, majf=0, minf=33 00:37:40.981 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.981 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.981 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.981 filename1: (groupid=0, jobs=1): err= 0: pid=1614602: Wed Nov 20 07:38:14 2024 00:37:40.981 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10017msec) 00:37:40.981 slat (nsec): min=5723, max=57367, avg=10257.42, stdev=6626.45 00:37:40.981 clat (usec): min=19837, max=36738, avg=33081.60, stdev=1442.44 00:37:40.981 lat (usec): min=19844, max=36754, avg=33091.86, stdev=1442.26 00:37:40.981 clat percentiles (usec): 00:37:40.981 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32637], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.982 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:40.982 | 99.99th=[36963] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1926.40, stdev=65.33, samples=20 00:37:40.982 iops : min= 448, max= 512, avg=481.60, stdev=16.33, samples=20 00:37:40.982 lat (msec) : 20=0.33%, 50=99.67% 00:37:40.982 cpu : usr=98.88%, sys=0.85%, ctx=12, majf=0, minf=72 00:37:40.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614603: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=482, 
BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:37:40.982 slat (nsec): min=5697, max=63257, avg=16039.67, stdev=10145.95 00:37:40.982 clat (usec): min=16347, max=36300, avg=33003.39, stdev=1630.24 00:37:40.982 lat (usec): min=16359, max=36320, avg=33019.43, stdev=1629.77 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.982 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:40.982 | 99.99th=[36439] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1926.74, stdev=67.11, samples=19 00:37:40.982 iops : min= 448, max= 512, avg=481.68, stdev=16.78, samples=19 00:37:40.982 lat (msec) : 20=0.54%, 50=99.46% 00:37:40.982 cpu : usr=98.67%, sys=0.97%, ctx=59, majf=0, minf=36 00:37:40.982 IO depths : 1=6.0%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614604: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=553, BW=2212KiB/s (2265kB/s)(21.6MiB/10010msec) 00:37:40.982 slat (nsec): min=5683, max=58244, avg=8619.30, stdev=4059.47 00:37:40.982 clat (usec): min=8543, max=36017, avg=28851.59, stdev=5589.82 00:37:40.982 lat (usec): min=8553, max=36024, avg=28860.21, stdev=5590.77 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[15795], 5.00th=[18482], 10.00th=[19530], 20.00th=[22938], 00:37:40.982 | 30.00th=[25560], 40.00th=[31851], 50.00th=[32637], 60.00th=[32637], 00:37:40.982 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:37:40.982 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:37:40.982 | 99.99th=[35914] 00:37:40.982 bw ( KiB/s): min= 1920, max= 2688, per=4.71%, avg=2209.68, stdev=244.51, samples=19 00:37:40.982 iops : min= 480, max= 672, avg=552.42, stdev=61.13, samples=19 00:37:40.982 lat (msec) : 10=0.04%, 20=10.66%, 50=89.31% 00:37:40.982 cpu : usr=98.92%, sys=0.80%, ctx=15, majf=0, minf=52 00:37:40.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614605: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:37:40.982 slat (nsec): min=5672, max=77467, avg=17073.85, stdev=14266.84 00:37:40.982 clat (usec): min=13681, max=57503, avg=31866.13, stdev=5900.15 00:37:40.982 lat (usec): min=13687, max=57510, avg=31883.21, stdev=5902.53 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[19530], 5.00th=[21365], 10.00th=[23462], 20.00th=[26870], 00:37:40.982 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:37:40.982 | 70.00th=[33162], 80.00th=[33817], 90.00th=[38011], 95.00th=[44303], 00:37:40.982 | 99.00th=[47973], 
99.50th=[49546], 99.90th=[56886], 99.95th=[57410], 00:37:40.982 | 99.99th=[57410] 00:37:40.982 bw ( KiB/s): min= 1632, max= 2416, per=4.29%, avg=2011.11, stdev=194.23, samples=19 00:37:40.982 iops : min= 408, max= 604, avg=502.74, stdev=48.58, samples=19 00:37:40.982 lat (msec) : 20=1.58%, 50=98.10%, 100=0.32% 00:37:40.982 cpu : usr=99.09%, sys=0.65%, ctx=15, majf=0, minf=25 00:37:40.982 IO depths : 1=3.2%, 2=6.5%, 4=15.7%, 8=64.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614606: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:37:40.982 slat (nsec): min=4299, max=83907, avg=21957.20, stdev=13394.83 00:37:40.982 clat (usec): min=23315, max=53982, avg=33155.18, stdev=1568.04 00:37:40.982 lat (usec): min=23336, max=53994, avg=33177.14, stdev=1566.12 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.982 | 99.00th=[35914], 99.50th=[36963], 99.90th=[53740], 99.95th=[53740], 00:37:40.982 | 99.99th=[53740] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.21, stdev=66.79, samples=19 00:37:40.982 iops : min= 448, max= 512, avg=478.26, stdev=16.77, samples=19 00:37:40.982 lat (msec) : 50=99.67%, 100=0.33% 00:37:40.982 cpu : usr=98.37%, sys=1.03%, ctx=141, majf=0, minf=25 00:37:40.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614607: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10022msec) 00:37:40.982 slat (nsec): min=5724, max=60547, avg=16159.79, stdev=9992.55 00:37:40.982 clat (usec): min=8574, max=36232, avg=32818.67, stdev=2634.45 00:37:40.982 lat (usec): min=8583, max=36258, avg=32834.83, stdev=2633.73 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[15926], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.982 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[36439], 00:37:40.982 | 99.99th=[36439] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2304, per=4.14%, avg=1939.20, stdev=95.38, samples=20 00:37:40.982 iops : min= 448, max= 576, avg=484.80, stdev=23.85, samples=20 00:37:40.982 lat (msec) : 10=0.10%, 20=1.54%, 50=98.36% 00:37:40.982 cpu : usr=98.99%, sys=0.73%, ctx=15, majf=0, minf=32 00:37:40.982 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=94.1%, 
8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614608: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10005msec) 00:37:40.982 slat (nsec): min=5667, max=67471, avg=14429.13, stdev=10420.48 00:37:40.982 clat (usec): min=9361, max=68578, avg=32720.26, stdev=4507.30 00:37:40.982 lat (usec): min=9375, max=68595, avg=32734.69, stdev=4507.41 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[17433], 5.00th=[23987], 10.00th=[30540], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:37:40.982 | 99.00th=[49021], 99.50th=[50070], 99.90th=[68682], 99.95th=[68682], 00:37:40.982 | 99.99th=[68682] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2096, per=4.14%, avg=1942.05, stdev=89.32, samples=19 00:37:40.982 iops : min= 448, max= 524, avg=485.47, stdev=22.40, samples=19 00:37:40.982 lat (msec) : 10=0.21%, 20=1.46%, 50=97.77%, 100=0.57% 00:37:40.982 cpu : usr=98.92%, sys=0.80%, ctx=35, majf=0, minf=45 00:37:40.982 IO depths : 1=1.1%, 2=4.8%, 4=15.5%, 8=65.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 complete : 0=0.0%, 4=92.2%, 8=3.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.982 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.982 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.982 filename2: (groupid=0, jobs=1): err= 0: pid=1614609: Wed Nov 20 07:38:14 2024 00:37:40.982 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10005msec) 00:37:40.982 slat (nsec): min=5677, max=57895, avg=12458.13, stdev=8558.08 00:37:40.982 clat (usec): min=9378, max=68260, avg=33229.86, stdev=2527.77 00:37:40.982 lat (usec): min=9384, max=68275, avg=33242.32, stdev=2527.40 00:37:40.982 clat percentiles (usec): 00:37:40.982 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:37:40.982 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:37:40.982 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.982 | 99.00th=[35914], 99.50th=[36439], 99.90th=[68682], 99.95th=[68682], 00:37:40.982 | 99.99th=[68682] 00:37:40.982 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.42, stdev=66.81, samples=19 00:37:40.982 iops : min= 448, max= 512, avg=478.32, stdev=16.78, samples=19 00:37:40.982 lat (msec) : 10=0.15%, 20=0.19%, 50=99.33%, 100=0.33% 00:37:40.982 cpu : usr=98.87%, sys=0.85%, ctx=15, majf=0, minf=31 00:37:40.982 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.983 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.983 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.983 filename2: (groupid=0, jobs=1): err= 0: pid=1614610: Wed Nov 20 07:38:14 2024 00:37:40.983 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:37:40.983 slat (nsec): min=4563, max=77534, avg=12814.73, stdev=11418.59 00:37:40.983 clat (usec): min=23327, max=62611, avg=33237.87, stdev=1639.97 00:37:40.983 lat (usec): min=23348, max=62625, 
avg=33250.68, stdev=1638.17 00:37:40.983 clat percentiles (usec): 00:37:40.983 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:37:40.983 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:37:40.983 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:37:40.983 | 99.00th=[36439], 99.50th=[36963], 99.90th=[53740], 99.95th=[53740], 00:37:40.983 | 99.99th=[62653] 00:37:40.983 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.21, stdev=66.79, samples=19 00:37:40.983 iops : min= 448, max= 512, avg=478.26, stdev=16.77, samples=19 00:37:40.983 lat (msec) : 50=99.67%, 100=0.33% 00:37:40.983 cpu : usr=98.15%, sys=1.27%, ctx=144, majf=0, minf=44 00:37:40.983 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:40.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.983 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.983 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.983 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:40.983 00:37:40.983 Run status group 0 (all jobs): 00:37:40.983 READ: bw=45.8MiB/s (48.0MB/s), 1919KiB/s-2212KiB/s (1965kB/s-2265kB/s), io=459MiB (481MB), run=10002-10028msec 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 
07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 bdev_null0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 [2024-11-20 07:38:14.596039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 bdev_null1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:40.983 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:40.984 { 00:37:40.984 "params": { 00:37:40.984 "name": "Nvme$subsystem", 00:37:40.984 "trtype": "$TEST_TRANSPORT", 00:37:40.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:40.984 "adrfam": "ipv4", 00:37:40.984 "trsvcid": "$NVMF_PORT", 00:37:40.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:40.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:40.984 "hdgst": ${hdgst:-false}, 00:37:40.984 "ddgst": ${ddgst:-false} 00:37:40.984 }, 00:37:40.984 "method": "bdev_nvme_attach_controller" 00:37:40.984 } 00:37:40.984 EOF 00:37:40.984 )") 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:40.984 { 00:37:40.984 "params": { 00:37:40.984 "name": "Nvme$subsystem", 00:37:40.984 "trtype": "$TEST_TRANSPORT", 00:37:40.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:40.984 "adrfam": "ipv4", 00:37:40.984 "trsvcid": "$NVMF_PORT", 00:37:40.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:40.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:40.984 "hdgst": ${hdgst:-false}, 00:37:40.984 "ddgst": ${ddgst:-false} 00:37:40.984 }, 00:37:40.984 "method": "bdev_nvme_attach_controller" 00:37:40.984 } 00:37:40.984 EOF 00:37:40.984 )") 
00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:40.984 "params": { 00:37:40.984 "name": "Nvme0", 00:37:40.984 "trtype": "tcp", 00:37:40.984 "traddr": "10.0.0.2", 00:37:40.984 "adrfam": "ipv4", 00:37:40.984 "trsvcid": "4420", 00:37:40.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.984 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:40.984 "hdgst": false, 00:37:40.984 "ddgst": false 00:37:40.984 }, 00:37:40.984 "method": "bdev_nvme_attach_controller" 00:37:40.984 },{ 00:37:40.984 "params": { 00:37:40.984 "name": "Nvme1", 00:37:40.984 "trtype": "tcp", 00:37:40.984 "traddr": "10.0.0.2", 00:37:40.984 "adrfam": "ipv4", 00:37:40.984 "trsvcid": "4420", 00:37:40.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:40.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:40.984 "hdgst": false, 00:37:40.984 "ddgst": false 00:37:40.984 }, 00:37:40.984 "method": "bdev_nvme_attach_controller" 00:37:40.984 }' 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:40.984 07:38:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.984 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:40.984 ... 00:37:40.984 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:40.984 ... 
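Before launching fio, the fio_plugin wrapper checks whether the spdk_bdev plugin links against a sanitizer runtime; if it does, that library must be LD_PRELOADed ahead of the plugin, because /usr/src/fio/fio itself is not instrumented. A minimal sketch of the detection traced above (paths are this run's workspace paths; in this log both greps return nothing, so asan_lib stays empty and only the plugin is preloaded):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # The third ldd column is the resolved library path, if linked.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# /dev/fd/62 and /dev/fd/61 are the JSON config and fio job file,
# supplied by the caller through process substitution.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61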
00:37:40.984 fio-3.35 00:37:40.984 Starting 4 threads 00:37:46.266 00:37:46.266 filename0: (groupid=0, jobs=1): err= 0: pid=1616896: Wed Nov 20 07:38:20 2024 00:37:46.266 read: IOPS=2140, BW=16.7MiB/s (17.5MB/s)(83.7MiB/5004msec) 00:37:46.266 slat (nsec): min=5481, max=49592, avg=8533.14, stdev=3714.37 00:37:46.266 clat (usec): min=1271, max=6596, avg=3714.60, stdev=558.16 00:37:46.266 lat (usec): min=1280, max=6604, avg=3723.14, stdev=557.94 00:37:46.266 clat percentiles (usec): 00:37:46.266 | 1.00th=[ 2409], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3294], 00:37:46.266 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 3818], 00:37:46.266 | 70.00th=[ 3851], 80.00th=[ 4047], 90.00th=[ 4293], 95.00th=[ 4686], 00:37:46.266 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6063], 99.95th=[ 6128], 00:37:46.266 | 99.99th=[ 6587] 00:37:46.266 bw ( KiB/s): min=16304, max=18517, per=25.75%, avg=17133.80, stdev=610.60, samples=10 00:37:46.266 iops : min= 2038, max= 2314, avg=2141.60, stdev=76.13, samples=10 00:37:46.266 lat (msec) : 2=0.55%, 4=78.77%, 10=20.68% 00:37:46.266 cpu : usr=96.90%, sys=2.80%, ctx=5, majf=0, minf=83 00:37:46.266 IO depths : 1=0.1%, 2=1.5%, 4=67.2%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.266 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.266 issued rwts: total=10713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:46.266 filename0: (groupid=0, jobs=1): err= 0: pid=1616897: Wed Nov 20 07:38:20 2024 00:37:46.266 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5002msec) 00:37:46.266 slat (nsec): min=7992, max=64956, avg=9158.84, stdev=3177.73 00:37:46.266 clat (usec): min=1597, max=46930, avg=3981.94, stdev=1300.22 00:37:46.266 lat (usec): min=1605, max=46955, avg=3991.10, stdev=1300.19 00:37:46.266 clat percentiles (usec): 00:37:46.266 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3621], 00:37:46.266 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:37:46.266 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4752], 00:37:46.266 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[46924], 00:37:46.266 | 99.99th=[46924] 00:37:46.266 bw ( KiB/s): min=15454, max=16864, per=24.00%, avg=15972.60, stdev=394.82, samples=10 00:37:46.266 iops : min= 1931, max= 2108, avg=1996.50, stdev=49.46, samples=10 00:37:46.266 lat (msec) : 2=0.02%, 4=67.67%, 10=32.23%, 50=0.08% 00:37:46.266 cpu : usr=94.16%, sys=3.98%, ctx=163, majf=0, minf=77 00:37:46.266 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.266 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.266 issued rwts: total=9986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.266 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:46.266 filename1: (groupid=0, jobs=1): err= 0: pid=1616898: Wed Nov 20 07:38:20 2024 00:37:46.266 read: IOPS=1989, BW=15.5MiB/s (16.3MB/s)(77.7MiB/5002msec) 00:37:46.266 slat (nsec): min=5481, max=42912, avg=8083.07, stdev=2988.65 00:37:46.266 clat (usec): min=1854, max=7281, avg=3999.17, stdev=618.79 00:37:46.266 lat (usec): min=1862, max=7287, avg=4007.26, stdev=618.40 00:37:46.266 clat percentiles (usec): 00:37:46.266 | 1.00th=[ 2835], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3654], 00:37:46.267 
| 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3818], 60.00th=[ 3884], 00:37:46.267 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4621], 95.00th=[ 5604], 00:37:46.267 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 6783], 00:37:46.267 | 99.99th=[ 7308] 00:37:46.267 bw ( KiB/s): min=15376, max=16752, per=24.03%, avg=15992.89, stdev=497.87, samples=9 00:37:46.267 iops : min= 1922, max= 2094, avg=1999.11, stdev=62.23, samples=9 00:37:46.267 lat (msec) : 2=0.01%, 4=65.85%, 10=34.14% 00:37:46.267 cpu : usr=96.88%, sys=2.84%, ctx=6, majf=0, minf=131 00:37:46.267 IO depths : 1=0.1%, 2=0.2%, 4=73.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.267 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.267 issued rwts: total=9950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.267 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:46.267 filename1: (groupid=0, jobs=1): err= 0: pid=1616899: Wed Nov 20 07:38:20 2024 00:37:46.267 read: IOPS=2194, BW=17.1MiB/s (18.0MB/s)(85.8MiB/5002msec) 00:37:46.267 slat (nsec): min=5475, max=39908, avg=7317.17, stdev=2889.05 00:37:46.267 clat (usec): min=1496, max=6610, avg=3625.45, stdev=632.20 00:37:46.267 lat (usec): min=1501, max=6618, avg=3632.76, stdev=632.30 00:37:46.267 clat percentiles (usec): 00:37:46.267 | 1.00th=[ 2507], 5.00th=[ 2802], 10.00th=[ 2900], 20.00th=[ 3097], 00:37:46.267 | 30.00th=[ 3228], 40.00th=[ 3458], 50.00th=[ 3589], 60.00th=[ 3785], 00:37:46.267 | 70.00th=[ 3818], 80.00th=[ 3982], 90.00th=[ 4359], 95.00th=[ 4948], 00:37:46.267 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6390], 00:37:46.267 | 99.99th=[ 6587] 00:37:46.267 bw ( KiB/s): min=16096, max=18656, per=26.12%, avg=17385.00, stdev=818.24, samples=9 00:37:46.267 iops : min= 2012, max= 2332, avg=2173.11, stdev=102.29, samples=9 00:37:46.267 lat (msec) : 2=0.23%, 4=80.40%, 10=19.38% 00:37:46.267 cpu : usr=97.70%, sys=2.02%, ctx=7, majf=0, minf=94 00:37:46.267 IO depths : 1=0.1%, 2=3.5%, 4=66.5%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.267 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.267 issued rwts: total=10977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.267 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:46.267 00:37:46.267 Run status group 0 (all jobs): 00:37:46.267 READ: bw=65.0MiB/s (68.1MB/s), 15.5MiB/s-17.1MiB/s (16.3MB/s-18.0MB/s), io=325MiB (341MB), run=5002-5004msec 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.267 07:38:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.267 07:38:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.267 00:37:46.267 real 0m24.341s 00:37:46.267 user 5m13.018s 00:37:46.267 sys 0m4.622s 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:46.267 07:38:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.267 ************************************ 00:37:46.267 END TEST fio_dif_rand_params 00:37:46.267 ************************************ 00:37:46.528 07:38:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:46.528 07:38:21 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:46.528 07:38:21 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:46.528 07:38:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.528 ************************************ 00:37:46.528 START TEST fio_dif_digest 00:37:46.529 ************************************ 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.529 bdev_null0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.529 [2024-11-20 07:38:21.134636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.529 { 00:37:46.529 "params": { 00:37:46.529 "name": "Nvme$subsystem", 00:37:46.529 "trtype": "$TEST_TRANSPORT", 00:37:46.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.529 "adrfam": "ipv4", 00:37:46.529 "trsvcid": "$NVMF_PORT", 00:37:46.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.529 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:46.529 "hdgst": ${hdgst:-false}, 00:37:46.529 "ddgst": ${ddgst:-false} 00:37:46.529 }, 00:37:46.529 "method": "bdev_nvme_attach_controller" 00:37:46.529 } 00:37:46.529 EOF 00:37:46.529 )") 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:46.529 "params": { 00:37:46.529 "name": "Nvme0", 00:37:46.529 "trtype": "tcp", 00:37:46.529 "traddr": "10.0.0.2", 00:37:46.529 "adrfam": "ipv4", 00:37:46.529 "trsvcid": "4420", 00:37:46.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.529 "hdgst": true, 00:37:46.529 "ddgst": true 00:37:46.529 }, 00:37:46.529 "method": "bdev_nvme_attach_controller" 00:37:46.529 }' 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:46.529 07:38:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.788 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:46.788 ... 
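gen_fio_conf streams the job file to fio over an anonymous descriptor, so it never appears verbatim in the log. From the parameters set at target/dif.sh@127-128 (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=10, hdgst/ddgst=true) and the job banner above, the implied job file is roughly the following — a reconstruction, not a capture, and the filename0/Nvme0n1 naming is an assumption:

cat <<'FIO' > /tmp/digest.fio
[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
FIO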
00:37:46.788 fio-3.35 00:37:46.788 Starting 3 threads 00:37:59.023 00:37:59.023 filename0: (groupid=0, jobs=1): err= 0: pid=1618262: Wed Nov 20 07:38:32 2024 00:37:59.023 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10045msec) 00:37:59.023 slat (nsec): min=5926, max=32150, avg=8095.54, stdev=1492.25 00:37:59.023 clat (usec): min=8327, max=55117, avg=14071.44, stdev=2306.05 00:37:59.023 lat (usec): min=8334, max=55126, avg=14079.54, stdev=2306.16 00:37:59.023 clat percentiles (usec): 00:37:59.023 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[12387], 20.00th=[13173], 00:37:59.023 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:37:59.023 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:37:59.023 | 99.00th=[17171], 99.50th=[17433], 99.90th=[53740], 99.95th=[53740], 00:37:59.023 | 99.99th=[55313] 00:37:59.023 bw ( KiB/s): min=24576, max=29696, per=32.53%, avg=27328.00, stdev=1063.25, samples=20 00:37:59.023 iops : min= 192, max= 232, avg=213.50, stdev= 8.31, samples=20 00:37:59.023 lat (msec) : 10=2.34%, 20=97.43%, 50=0.09%, 100=0.14% 00:37:59.023 cpu : usr=95.21%, sys=4.57%, ctx=21, majf=0, minf=85 00:37:59.023 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:59.023 filename0: (groupid=0, jobs=1): err= 0: pid=1618263: Wed Nov 20 07:38:32 2024 00:37:59.023 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(303MiB/10045msec) 00:37:59.023 slat (nsec): min=5861, max=31736, avg=6649.40, stdev=1029.01 00:37:59.023 clat (usec): min=7660, max=52380, avg=12400.74, stdev=1677.41 00:37:59.023 lat (usec): min=7667, max=52387, avg=12407.39, stdev=1677.41 00:37:59.023 clat percentiles (usec): 00:37:59.023 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11600], 00:37:59.023 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:37:59.023 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:37:59.023 | 99.00th=[14877], 99.50th=[15139], 99.90th=[16057], 99.95th=[49021], 00:37:59.023 | 99.99th=[52167] 00:37:59.023 bw ( KiB/s): min=29696, max=32768, per=36.92%, avg=31014.40, stdev=710.98, samples=20 00:37:59.023 iops : min= 232, max= 256, avg=242.30, stdev= 5.55, samples=20 00:37:59.023 lat (msec) : 10=6.56%, 20=93.36%, 50=0.04%, 100=0.04% 00:37:59.023 cpu : usr=94.92%, sys=4.85%, ctx=30, majf=0, minf=122 00:37:59.023 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 issued rwts: total=2425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:59.023 filename0: (groupid=0, jobs=1): err= 0: pid=1618264: Wed Nov 20 07:38:32 2024 00:37:59.023 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10044msec) 00:37:59.023 slat (nsec): min=5914, max=32670, avg=7833.55, stdev=1629.09 00:37:59.023 clat (usec): min=8617, max=55618, avg=14805.26, stdev=5499.83 00:37:59.023 lat (usec): min=8626, max=55625, avg=14813.09, stdev=5499.79 00:37:59.023 clat percentiles (usec): 00:37:59.023 | 1.00th=[11600], 5.00th=[12387], 10.00th=[12780], 
20.00th=[13173], 00:37:59.023 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:37:59.023 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:37:59.023 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:37:59.023 | 99.99th=[55837] 00:37:59.023 bw ( KiB/s): min=22528, max=28160, per=30.91%, avg=25971.20, stdev=1552.70, samples=20 00:37:59.023 iops : min= 176, max= 220, avg=202.90, stdev=12.13, samples=20 00:37:59.023 lat (msec) : 10=0.10%, 20=98.03%, 50=0.05%, 100=1.82% 00:37:59.023 cpu : usr=95.12%, sys=4.60%, ctx=105, majf=0, minf=172 00:37:59.023 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.023 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:59.023 00:37:59.023 Run status group 0 (all jobs): 00:37:59.023 READ: bw=82.0MiB/s (86.0MB/s), 25.3MiB/s-30.2MiB/s (26.5MB/s-31.6MB/s), io=824MiB (864MB), run=10044-10045msec 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.023 00:37:59.023 real 0m11.256s 00:37:59.023 user 0m42.350s 00:37:59.023 sys 0m1.715s 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:59.023 07:38:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:59.023 ************************************ 00:37:59.023 END TEST fio_dif_digest 00:37:59.023 ************************************ 00:37:59.023 07:38:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:59.023 07:38:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.023 rmmod nvme_tcp 00:37:59.023 rmmod nvme_fabrics 00:37:59.023 rmmod nvme_keyring 00:37:59.023 
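nvmftestfini unloads the initiator stack under set +e with a bounded retry (the for i in {1..20} at nvmf/common.sh@125 above), since modprobe -r nvme-tcp can transiently fail while connections are still draining; here the first pass succeeds and rmmod reports nvme_tcp, nvme_fabrics and nvme_keyring removed. A sketch of that loop (the sleep between attempts is an assumption; the log only shows the successful first iteration):

sync
set +e
for i in {1..20}; do
  modprobe -v -r nvme-tcp
  # nvme-fabrics only unloads once nothing depends on it.
  modprobe -v -r nvme-fabrics && break
  sleep 1
done
set -e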
07:38:32 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1607928 ']' 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1607928 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1607928 ']' 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1607928 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1607928 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1607928' 00:37:59.023 killing process with pid 1607928 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1607928 00:37:59.023 07:38:32 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1607928 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:59.023 07:38:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:01.563 Waiting for block devices as requested 00:38:01.563 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:01.563 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:01.563 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:01.563 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:01.563 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:01.822 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:01.822 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:01.822 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:02.082 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:02.082 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:02.342 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:02.342 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:02.342 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:02.342 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:02.602 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:02.602 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:02.602 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:02.862 07:38:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.862 07:38:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:02.862 07:38:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.409 07:38:39 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:38:05.409 00:38:05.409 real 1m19.187s 00:38:05.409 user 7m59.559s 00:38:05.409 sys 0m22.563s 00:38:05.409 07:38:39 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:05.409 07:38:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.409 ************************************ 00:38:05.409 END TEST nvmf_dif 00:38:05.409 ************************************ 00:38:05.409 07:38:39 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:05.409 07:38:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:05.409 07:38:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:05.409 07:38:39 -- common/autotest_common.sh@10 -- # set +x 00:38:05.409 ************************************ 00:38:05.409 START TEST nvmf_abort_qd_sizes 00:38:05.409 ************************************ 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:05.409 * Looking for test storage... 00:38:05.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.409 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.410 --rc genhtml_branch_coverage=1 00:38:05.410 --rc genhtml_function_coverage=1 00:38:05.410 --rc genhtml_legend=1 00:38:05.410 --rc geninfo_all_blocks=1 00:38:05.410 --rc geninfo_unexecuted_blocks=1 00:38:05.410 00:38:05.410 ' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.410 --rc genhtml_branch_coverage=1 00:38:05.410 --rc genhtml_function_coverage=1 00:38:05.410 --rc genhtml_legend=1 00:38:05.410 --rc geninfo_all_blocks=1 00:38:05.410 --rc geninfo_unexecuted_blocks=1 00:38:05.410 00:38:05.410 ' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.410 --rc genhtml_branch_coverage=1 00:38:05.410 --rc genhtml_function_coverage=1 00:38:05.410 --rc genhtml_legend=1 00:38:05.410 --rc geninfo_all_blocks=1 00:38:05.410 --rc geninfo_unexecuted_blocks=1 00:38:05.410 00:38:05.410 ' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:05.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.410 --rc genhtml_branch_coverage=1 00:38:05.410 --rc genhtml_function_coverage=1 00:38:05.410 --rc genhtml_legend=1 00:38:05.410 --rc geninfo_all_blocks=1 00:38:05.410 --rc geninfo_unexecuted_blocks=1 00:38:05.410 00:38:05.410 ' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:05.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:05.410 07:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.548 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:13.549 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:13.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:13.549 Found net devices under 0000:31:00.0: cvl_0_0 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:13.549 Found net devices under 0000:31:00.1: cvl_0_1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.549 07:38:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.549 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.809 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.809 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:38:13.810 00:38:13.810 --- 10.0.0.2 ping statistics --- 00:38:13.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.810 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:13.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:38:13.810 00:38:13.810 --- 10.0.0.1 ping statistics --- 00:38:13.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.810 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:13.810 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:18.014 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:18.014 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:18.014 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1628504 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1628504 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1628504 ']' 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:18.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:18.302 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.302 [2024-11-20 07:38:52.859612] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:38:18.302 [2024-11-20 07:38:52.859667] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.302 [2024-11-20 07:38:52.947671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:18.302 [2024-11-20 07:38:52.986453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.302 [2024-11-20 07:38:52.986489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:18.302 [2024-11-20 07:38:52.986498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.302 [2024-11-20 07:38:52.986505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.302 [2024-11-20 07:38:52.986511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.302 [2024-11-20 07:38:52.988234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.302 [2024-11-20 07:38:52.988499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.302 [2024-11-20 07:38:52.988500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:18.302 [2024-11-20 07:38:52.988334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.243 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:19.244 
07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:19.244 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.244 ************************************ 00:38:19.244 START TEST spdk_target_abort 00:38:19.244 ************************************ 00:38:19.244 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:38:19.244 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:19.244 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:19.244 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.244 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.504 spdk_targetn1 00:38:19.504 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.504 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:19.504 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.505 [2024-11-20 07:38:54.071895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:19.505 [2024-11-20 07:38:54.120201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:19.505 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.765 [2024-11-20 07:38:54.273334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:296 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.765 [2024-11-20 07:38:54.273362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0026 p:1 m:0 dnr:0 00:38:19.765 [2024-11-20 07:38:54.289870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:856 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:19.765 [2024-11-20 07:38:54.289887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:38:19.765 [2024-11-20 07:38:54.313318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1672 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.313334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d3 p:1 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.337281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2544 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.337298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.345319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2832 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.345333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.373348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3968 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.373364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f2 p:0 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.373432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3984 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.373442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f3 p:0 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.373776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:4008 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.373786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:38:19.766 [2024-11-20 07:38:54.396373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4768 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:19.766 [2024-11-20 07:38:54.396388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:38:23.066 Initializing NVMe Controllers 00:38:23.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:23.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:23.067 Initialization complete. Launching workers. 
00:38:23.067 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12405, failed: 9 00:38:23.067 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3172, failed to submit 9242 00:38:23.067 success 708, unsuccessful 2464, failed 0 00:38:23.067 07:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.067 07:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.067 [2024-11-20 07:38:57.526115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2024 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:38:23.067 [2024-11-20 07:38:57.526148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:23.067 [2024-11-20 07:38:57.541991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:2392 len:8 PRP1 0x200004e54000 PRP2 0x0 00:38:23.067 [2024-11-20 07:38:57.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:23.067 [2024-11-20 07:38:57.566061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:2912 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:38:23.067 [2024-11-20 07:38:57.566084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:23.067 [2024-11-20 07:38:57.613988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:4016 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:38:23.067 [2024-11-20 07:38:57.614012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:23.067 [2024-11-20 07:38:57.653988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:4920 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:38:23.067 [2024-11-20 07:38:57.654010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0071 p:1 m:0 dnr:0 00:38:26.367 Initializing NVMe Controllers 00:38:26.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:26.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:26.367 Initialization complete. Launching workers. 
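The NS:/CTRLR: counter pairs printed after each run are worth decoding once. The abort example drives a 4 KiB random read/write load (-w rw -M 50) at the given queue depth while racing abort commands against the outstanding I/O; on that reading, "abort submitted" splits into success (the command really was aborted, matching the ABORTED - BY REQUEST completions above) and unsuccessful (the command completed first), while "failed to submit" counts aborts that never went out. The qd=4 numbers are internally consistent under this interpretation, as a quick shell check shows:

    # qd=4 run: one abort attempt per I/O
    echo $((708 + 2464))    # = 3172  -> matches 'abort submitted'
    echo $((3172 + 9242))   # = 12414 -> total abort attempts
    echo $((12405 + 9))     # = 12414 -> I/O completed + failed, same total

The same identities hold for the qd=24 and qd=64 runs whose output follows.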
00:38:26.367 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8615, failed: 5 00:38:26.367 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7392 00:38:26.367 success 338, unsuccessful 890, failed 0 00:38:26.367 07:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:26.367 07:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.668 Initializing NVMe Controllers 00:38:29.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:29.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:29.668 Initialization complete. Launching workers. 00:38:29.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41957, failed: 0 00:38:29.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2684, failed to submit 39273 00:38:29.668 success 619, unsuccessful 2065, failed 0 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.668 07:39:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1628504 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1628504 ']' 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1628504 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1628504 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1628504' 00:38:31.051 killing process with pid 1628504 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1628504 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@976 -- # wait 1628504 00:38:31.051 00:38:31.051 real 0m12.030s 00:38:31.051 user 0m49.188s 00:38:31.051 sys 0m1.822s 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:31.051 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:31.051 ************************************ 00:38:31.051 END TEST spdk_target_abort 00:38:31.051 ************************************ 00:38:31.312 07:39:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:31.312 07:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:31.312 07:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:31.312 07:39:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:31.312 ************************************ 00:38:31.312 START TEST kernel_target_abort 00:38:31.312 ************************************ 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:31.312 07:39:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:35.513 Waiting for block devices as requested 00:38:35.513 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:35.513 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:35.774 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:35.774 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:35.774 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:36.035 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:36.035 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:36.035 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:36.035 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:36.295 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:36.555 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:36.556 No valid GPT data, bailing 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:36.556 07:39:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:36.556 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:38:36.816 00:38:36.816 Discovery Log Number of Records 2, Generation counter 2 00:38:36.816 =====Discovery Log Entry 0====== 00:38:36.816 trtype: tcp 00:38:36.816 adrfam: ipv4 00:38:36.816 subtype: current discovery subsystem 00:38:36.816 treq: not specified, sq flow control disable supported 00:38:36.816 portid: 1 00:38:36.816 trsvcid: 4420 00:38:36.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:36.816 traddr: 10.0.0.1 00:38:36.816 eflags: none 00:38:36.816 sectype: none 00:38:36.816 =====Discovery Log Entry 1====== 00:38:36.816 trtype: tcp 00:38:36.816 adrfam: ipv4 00:38:36.816 subtype: nvme subsystem 00:38:36.816 treq: not specified, sq flow control disable supported 00:38:36.816 portid: 1 00:38:36.816 trsvcid: 4420 00:38:36.816 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:36.816 traddr: 10.0.0.1 00:38:36.816 eflags: none 00:38:36.816 sectype: none 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.816 07:39:11 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:36.816 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:40.218 Initializing NVMe Controllers 00:38:40.218 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:40.218 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:40.218 Initialization complete. Launching workers. 00:38:40.218 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67003, failed: 0 00:38:40.218 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67003, failed to submit 0 00:38:40.219 success 0, unsuccessful 67003, failed 0 00:38:40.219 07:39:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:40.219 07:39:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:43.521 Initializing NVMe Controllers 00:38:43.521 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:43.521 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:43.521 Initialization complete. Launching workers. 
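The kernel_target_abort half now running swapped the SPDK app for the in-kernel nvmet driver, configured purely through configfs (the mkdir/echo/ln trace further up, nvmf/common.sh@686-705). xtrace hides redirection targets, so the attribute paths below are filled in from the standard nvmet configfs ABI rather than read out of the log; treat this as a hedged reconstruction of configure_kernel_target:

    # export /dev/nvme0n1 as an NVMe/TCP subsystem via the kernel target
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    # (@693 also writes an ID string, SPDK-nqn...; its target file is hidden by xtrace)
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp  > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

One behavioural difference is already visible in the counters: against the kernel target every submitted abort comes back unsuccessful (success 0 in the qd=4 run above, repeated in the runs below), consistent with nvmet completing the racing I/O before any abort lands, though the log itself does not state the cause.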
00:38:43.521 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108212, failed: 0 00:38:43.521 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27302, failed to submit 80910 00:38:43.521 success 0, unsuccessful 27302, failed 0 00:38:43.521 07:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:43.521 07:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.066 Initializing NVMe Controllers 00:38:46.066 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:46.066 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:46.066 Initialization complete. Launching workers. 00:38:46.066 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101866, failed: 0 00:38:46.066 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25482, failed to submit 76384 00:38:46.066 success 0, unsuccessful 25482, failed 0 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:46.066 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:50.272 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:50.272 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:50.272 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:52.182 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:52.182 00:38:52.182 real 0m21.046s 00:38:52.182 user 0m10.162s 00:38:52.182 sys 0m6.534s 00:38:52.182 07:39:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:52.182 07:39:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:52.182 ************************************ 00:38:52.182 END TEST kernel_target_abort 00:38:52.182 ************************************ 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:52.443 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:52.444 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:52.444 rmmod nvme_tcp 00:38:52.444 rmmod nvme_fabrics 00:38:52.444 rmmod nvme_keyring 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1628504 ']' 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1628504 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1628504 ']' 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1628504 00:38:52.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1628504) - No such process 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1628504 is not found' 00:38:52.444 Process with pid 1628504 is not found 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:52.444 07:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:56.647 Waiting for block devices as requested 00:38:56.647 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:56.647 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:56.907 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:56.907 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:57.167 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:57.167 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:57.167 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:57.167 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:57.427 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:57.427 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:57.427 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:57.427 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:57.999 07:39:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.910 07:39:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.910 00:38:59.910 real 0m54.848s 00:38:59.910 user 1m5.278s 00:38:59.910 sys 0m20.913s 00:38:59.910 07:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:59.910 07:39:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:59.910 ************************************ 00:38:59.910 END TEST nvmf_abort_qd_sizes 00:38:59.910 ************************************ 00:38:59.911 07:39:34 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:59.911 07:39:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:59.911 07:39:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:59.911 07:39:34 -- common/autotest_common.sh@10 -- # set +x 00:38:59.911 ************************************ 00:38:59.911 START TEST keyring_file 00:38:59.911 ************************************ 00:38:59.911 07:39:34 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:00.172 * Looking for test storage... 
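The teardown traced above also explains the comment planted on the iptables rule when the suite started: cleanup never tracks rule positions, it just filters the saved ruleset by the SPDK_NVMF tag and restores what is left. Roughly, with the namespace-removal step an assumption (the _remove_spdk_ns body is hidden behind xtrace_disable_per_cmd):

    # drop only the harness's tagged firewall rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # unload the initiator-side stack (nvme_tcp, nvme_fabrics, nvme_keyring)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # assumed: delete the target namespace, then flush the initiator port
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1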
00:39:00.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:00.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.172 --rc genhtml_branch_coverage=1 00:39:00.172 --rc genhtml_function_coverage=1 00:39:00.172 --rc genhtml_legend=1 00:39:00.172 --rc geninfo_all_blocks=1 00:39:00.172 --rc geninfo_unexecuted_blocks=1 00:39:00.172 00:39:00.172 ' 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:00.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.172 --rc genhtml_branch_coverage=1 00:39:00.172 --rc genhtml_function_coverage=1 00:39:00.172 --rc genhtml_legend=1 00:39:00.172 --rc geninfo_all_blocks=1 
00:39:00.172 --rc geninfo_unexecuted_blocks=1 00:39:00.172 00:39:00.172 ' 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:00.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.172 --rc genhtml_branch_coverage=1 00:39:00.172 --rc genhtml_function_coverage=1 00:39:00.172 --rc genhtml_legend=1 00:39:00.172 --rc geninfo_all_blocks=1 00:39:00.172 --rc geninfo_unexecuted_blocks=1 00:39:00.172 00:39:00.172 ' 00:39:00.172 07:39:34 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:00.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.172 --rc genhtml_branch_coverage=1 00:39:00.172 --rc genhtml_function_coverage=1 00:39:00.172 --rc genhtml_legend=1 00:39:00.172 --rc geninfo_all_blocks=1 00:39:00.172 --rc geninfo_unexecuted_blocks=1 00:39:00.172 00:39:00.172 ' 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.172 07:39:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.172 07:39:34 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.172 07:39:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.172 07:39:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.172 07:39:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:00.172 07:39:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:00.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:00.172 07:39:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
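The prep_key trace that follows wraps a raw hex key in the NVMe TLS PSK "interchange format" before writing it to a mktemp file with mode 0600: the NVMeTLSkey-1 prefix, a two-digit hash field (digest 0 here, meaning no hash transform), then base64 of the key bytes with their CRC-32 appended, and a trailing colon. xtrace does not show the body of the inline `python -`, so the transform below is a reconstruction of that format, not a copy of the harness code:

    key_hex=00112233445566778899aabbccddeeff
    # build "NVMeTLSkey-1:00:<base64(key || crc32_le(key))>:"
    psk=$(python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(k+c).decode()+":")' "$key_hex")
    keyfile=$(mktemp)
    echo "$psk" > "$keyfile"
    chmod 0600 "$keyfile"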
00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pE4kw4m5RR 00:39:00.172 07:39:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.172 07:39:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:00.173 07:39:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:00.173 07:39:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pE4kw4m5RR 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pE4kw4m5RR 00:39:00.173 07:39:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pE4kw4m5RR 00:39:00.173 07:39:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Z8flxerJjI 00:39:00.173 07:39:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:00.173 07:39:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.173 07:39:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:00.173 07:39:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:00.433 07:39:34 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:00.433 07:39:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:00.433 07:39:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:00.433 07:39:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Z8flxerJjI 00:39:00.433 07:39:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Z8flxerJjI 00:39:00.433 07:39:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Z8flxerJjI 00:39:00.433 07:39:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=1639913 00:39:00.433 07:39:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1639913 00:39:00.433 07:39:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1639913 ']' 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:00.433 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.433 [2024-11-20 07:39:35.038587] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:39:00.433 [2024-11-20 07:39:35.038639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639913 ] 00:39:00.433 [2024-11-20 07:39:35.115322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.433 [2024-11-20 07:39:35.152195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:39:01.375 07:39:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.375 [2024-11-20 07:39:35.817389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.375 null0 00:39:01.375 [2024-11-20 07:39:35.849431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:01.375 [2024-11-20 07:39:35.849794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.375 07:39:35 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.375 [2024-11-20 07:39:35.881503] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:01.375 request: 00:39:01.375 { 00:39:01.375 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.375 "secure_channel": false, 00:39:01.375 "listen_address": { 00:39:01.375 "trtype": "tcp", 00:39:01.375 "traddr": "127.0.0.1", 00:39:01.375 "trsvcid": "4420" 00:39:01.375 }, 00:39:01.375 "method": "nvmf_subsystem_add_listener", 00:39:01.375 "req_id": 1 00:39:01.375 } 00:39:01.375 Got JSON-RPC error response 00:39:01.375 response: 00:39:01.375 { 00:39:01.375 
"code": -32602, 00:39:01.375 "message": "Invalid parameters" 00:39:01.375 } 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:01.375 07:39:35 keyring_file -- keyring/file.sh@47 -- # bperfpid=1639990 00:39:01.375 07:39:35 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1639990 /var/tmp/bperf.sock 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1639990 ']' 00:39:01.375 07:39:35 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:01.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:01.375 07:39:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:01.375 [2024-11-20 07:39:35.941154] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:39:01.375 [2024-11-20 07:39:35.941204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639990 ] 00:39:01.376 [2024-11-20 07:39:36.036271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.376 [2024-11-20 07:39:36.073075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.314 07:39:36 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:02.314 07:39:36 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:39:02.314 07:39:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:02.314 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:02.314 07:39:36 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Z8flxerJjI 00:39:02.314 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Z8flxerJjI 00:39:02.574 07:39:37 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:02.574 07:39:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:39:02.574 07:39:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pE4kw4m5RR == \/\t\m\p\/\t\m\p\.\p\E\4\k\w\4\m\5\R\R ]] 00:39:02.574 07:39:37 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:02.574 07:39:37 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.574 07:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.834 07:39:37 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Z8flxerJjI == \/\t\m\p\/\t\m\p\.\Z\8\f\l\x\e\r\J\j\I ]] 00:39:02.834 07:39:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:02.834 07:39:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:02.834 07:39:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.834 07:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.834 07:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:02.834 07:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.095 07:39:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:03.095 07:39:37 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:03.095 07:39:37 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:03.095 07:39:37 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.095 07:39:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:03.356 [2024-11-20 07:39:37.937036] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:03.356 nvme0n1 00:39:03.356 07:39:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:03.356 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:03.356 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.356 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.356 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:03.356 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.616 07:39:38 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:03.616 07:39:38 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:03.616 07:39:38 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:03.616 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:03.616 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:03.616 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:03.616 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:03.877 07:39:38 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:03.877 07:39:38 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:03.877 Running I/O for 1 seconds... 00:39:04.815 16506.00 IOPS, 64.48 MiB/s 00:39:04.815 Latency(us) 00:39:04.815 [2024-11-20T06:39:39.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.815 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:04.815 nvme0n1 : 1.01 16514.65 64.51 0.00 0.00 7720.70 5898.24 18459.31 00:39:04.815 [2024-11-20T06:39:39.582Z] =================================================================================================================== 00:39:04.815 [2024-11-20T06:39:39.582Z] Total : 16514.65 64.51 0.00 0.00 7720.70 5898.24 18459.31 00:39:04.815 { 00:39:04.815 "results": [ 00:39:04.815 { 00:39:04.815 "job": "nvme0n1", 00:39:04.815 "core_mask": "0x2", 00:39:04.815 "workload": "randrw", 00:39:04.815 "percentage": 50, 00:39:04.815 "status": "finished", 00:39:04.815 "queue_depth": 128, 00:39:04.815 "io_size": 4096, 00:39:04.815 "runtime": 1.007348, 00:39:04.815 "iops": 16514.65034923383, 00:39:04.816 "mibps": 64.51035292669465, 00:39:04.816 "io_failed": 0, 00:39:04.816 "io_timeout": 0, 00:39:04.816 "avg_latency_us": 7720.695731345675, 00:39:04.816 "min_latency_us": 5898.24, 00:39:04.816 "max_latency_us": 18459.306666666667 00:39:04.816 } 00:39:04.816 ], 00:39:04.816 "core_count": 1 00:39:04.816 } 00:39:04.816 07:39:39 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:04.816 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:05.075 07:39:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:05.075 07:39:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.075 07:39:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.075 07:39:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.075 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.075 07:39:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:05.335 07:39:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:05.335 07:39:39 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:05.335 07:39:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:05.335 07:39:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.335 07:39:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.335 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.335 07:39:39 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.335 07:39:40 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:05.335 07:39:40 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:05.335 07:39:40 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:05.335 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:05.595 [2024-11-20 07:39:40.212606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:05.595 [2024-11-20 07:39:40.213388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cc9d0 (107): Transport endpoint is not connected 00:39:05.595 [2024-11-20 07:39:40.214385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cc9d0 (9): Bad file descriptor 00:39:05.595 [2024-11-20 07:39:40.215388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:05.595 [2024-11-20 07:39:40.215403] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:05.595 [2024-11-20 07:39:40.215409] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:05.595 [2024-11-20 07:39:40.215415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
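The errno 107 and "Bad file descriptor" errors above are the expected outcome of this step: the controller is attached with --psk key1 while the target's listener was set up against key0, so the TLS handshake cannot complete, and the NOT wrapper asserts the non-zero exit (es=1). The JSON-RPC request and error response for the call are dumped next. A minimal sketch of the same negative check, reusing the command from the trace (rpc.py path shortened from the absolute path used in this run):

if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected success: a mismatched PSK must not connect" >&2
    exit 1
fi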
00:39:05.595 request: 00:39:05.595 { 00:39:05.595 "name": "nvme0", 00:39:05.595 "trtype": "tcp", 00:39:05.595 "traddr": "127.0.0.1", 00:39:05.595 "adrfam": "ipv4", 00:39:05.595 "trsvcid": "4420", 00:39:05.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:05.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:05.595 "prchk_reftag": false, 00:39:05.595 "prchk_guard": false, 00:39:05.595 "hdgst": false, 00:39:05.595 "ddgst": false, 00:39:05.595 "psk": "key1", 00:39:05.595 "allow_unrecognized_csi": false, 00:39:05.595 "method": "bdev_nvme_attach_controller", 00:39:05.595 "req_id": 1 00:39:05.595 } 00:39:05.595 Got JSON-RPC error response 00:39:05.595 response: 00:39:05.595 { 00:39:05.595 "code": -5, 00:39:05.595 "message": "Input/output error" 00:39:05.595 } 00:39:05.595 07:39:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:05.595 07:39:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:05.595 07:39:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:05.595 07:39:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:05.595 07:39:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:05.595 07:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.595 07:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.595 07:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.595 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.595 07:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:05.855 07:39:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:05.855 07:39:40 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:05.855 07:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:05.855 07:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.855 07:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.855 07:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.855 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.855 07:39:40 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:05.855 07:39:40 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:05.856 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:06.115 07:39:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:06.115 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:06.375 07:39:40 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:06.375 07:39:40 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:06.375 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.375 07:39:41 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:06.375 07:39:41 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.pE4kw4m5RR 00:39:06.375 07:39:41 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.375 07:39:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.375 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.635 [2024-11-20 07:39:41.261102] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pE4kw4m5RR': 0100660 00:39:06.635 [2024-11-20 07:39:41.261120] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:06.635 request: 00:39:06.635 { 00:39:06.635 "name": "key0", 00:39:06.635 "path": "/tmp/tmp.pE4kw4m5RR", 00:39:06.635 "method": "keyring_file_add_key", 00:39:06.635 "req_id": 1 00:39:06.635 } 00:39:06.635 Got JSON-RPC error response 00:39:06.635 response: 00:39:06.635 { 00:39:06.635 "code": -1, 00:39:06.635 "message": "Operation not permitted" 00:39:06.635 } 00:39:06.635 07:39:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:06.635 07:39:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:06.635 07:39:41 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:06.635 07:39:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:06.635 07:39:41 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.pE4kw4m5RR 00:39:06.635 07:39:41 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.635 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pE4kw4m5RR 00:39:06.896 07:39:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.pE4kw4m5RR 00:39:06.897 07:39:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:06.897 07:39:41 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:06.897 07:39:41 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:06.897 07:39:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.897 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.158 [2024-11-20 07:39:41.786430] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pE4kw4m5RR': No such file or directory 00:39:07.158 [2024-11-20 07:39:41.786443] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:07.158 [2024-11-20 07:39:41.786457] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:07.158 [2024-11-20 07:39:41.786463] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:07.158 [2024-11-20 07:39:41.786469] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:07.158 [2024-11-20 07:39:41.786474] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:07.158 request: 00:39:07.158 { 00:39:07.158 "name": "nvme0", 00:39:07.158 "trtype": "tcp", 00:39:07.158 "traddr": "127.0.0.1", 00:39:07.158 "adrfam": "ipv4", 00:39:07.158 "trsvcid": "4420", 00:39:07.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.158 "prchk_reftag": false, 00:39:07.158 "prchk_guard": false, 00:39:07.158 "hdgst": false, 00:39:07.158 "ddgst": false, 00:39:07.158 "psk": "key0", 00:39:07.158 "allow_unrecognized_csi": false, 00:39:07.158 "method": "bdev_nvme_attach_controller", 00:39:07.158 "req_id": 1 00:39:07.158 } 00:39:07.158 Got JSON-RPC error response 00:39:07.158 response: 00:39:07.158 { 00:39:07.158 "code": -19, 00:39:07.158 "message": "No such device" 00:39:07.158 } 00:39:07.158 07:39:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:07.158 07:39:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:07.158 07:39:41 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:07.158 07:39:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:07.158 07:39:41 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.158 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:07.419 07:39:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TExlCRav08 00:39:07.419 07:39:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:07.419 07:39:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:07.419 07:39:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TExlCRav08 00:39:07.419 07:39:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TExlCRav08 00:39:07.419 07:39:42 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TExlCRav08 00:39:07.419 07:39:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TExlCRav08 00:39:07.419 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TExlCRav08 00:39:07.679 07:39:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.679 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:07.940 nvme0n1 00:39:07.940 07:39:42 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.940 07:39:42 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:07.940 07:39:42 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.940 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:08.200 07:39:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:08.200 07:39:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:08.200 07:39:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.200 07:39:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.200 07:39:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.462 07:39:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:08.462 07:39:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:08.462 07:39:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:08.462 07:39:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:08.462 07:39:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:08.462 07:39:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:08.462 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.462 07:39:43 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:08.462 07:39:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:08.462 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:08.722 07:39:43 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:08.722 07:39:43 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:08.722 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.983 07:39:43 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:08.983 07:39:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TExlCRav08 00:39:08.983 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TExlCRav08 00:39:08.983 07:39:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Z8flxerJjI 00:39:08.983 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Z8flxerJjI 00:39:09.244 07:39:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.244 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.504 nvme0n1 00:39:09.504 07:39:44 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:09.504 07:39:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:09.765 07:39:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:09.765 "subsystems": [ 00:39:09.765 { 00:39:09.765 "subsystem": "keyring", 00:39:09.765 "config": [ 00:39:09.765 { 00:39:09.765 "method": "keyring_file_add_key", 00:39:09.765 "params": { 00:39:09.765 "name": "key0", 00:39:09.765 "path": "/tmp/tmp.TExlCRav08" 00:39:09.765 } 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "method": "keyring_file_add_key", 00:39:09.765 "params": { 00:39:09.765 "name": "key1", 00:39:09.765 "path": "/tmp/tmp.Z8flxerJjI" 00:39:09.765 } 00:39:09.765 } 00:39:09.765 ] 00:39:09.765 
}, 00:39:09.765 { 00:39:09.765 "subsystem": "iobuf", 00:39:09.765 "config": [ 00:39:09.765 { 00:39:09.765 "method": "iobuf_set_options", 00:39:09.765 "params": { 00:39:09.765 "small_pool_count": 8192, 00:39:09.765 "large_pool_count": 1024, 00:39:09.765 "small_bufsize": 8192, 00:39:09.765 "large_bufsize": 135168, 00:39:09.765 "enable_numa": false 00:39:09.765 } 00:39:09.765 } 00:39:09.765 ] 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "subsystem": "sock", 00:39:09.765 "config": [ 00:39:09.765 { 00:39:09.765 "method": "sock_set_default_impl", 00:39:09.765 "params": { 00:39:09.765 "impl_name": "posix" 00:39:09.765 } 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "method": "sock_impl_set_options", 00:39:09.765 "params": { 00:39:09.765 "impl_name": "ssl", 00:39:09.765 "recv_buf_size": 4096, 00:39:09.765 "send_buf_size": 4096, 00:39:09.765 "enable_recv_pipe": true, 00:39:09.765 "enable_quickack": false, 00:39:09.765 "enable_placement_id": 0, 00:39:09.765 "enable_zerocopy_send_server": true, 00:39:09.765 "enable_zerocopy_send_client": false, 00:39:09.765 "zerocopy_threshold": 0, 00:39:09.765 "tls_version": 0, 00:39:09.765 "enable_ktls": false 00:39:09.765 } 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "method": "sock_impl_set_options", 00:39:09.765 "params": { 00:39:09.765 "impl_name": "posix", 00:39:09.765 "recv_buf_size": 2097152, 00:39:09.765 "send_buf_size": 2097152, 00:39:09.765 "enable_recv_pipe": true, 00:39:09.765 "enable_quickack": false, 00:39:09.765 "enable_placement_id": 0, 00:39:09.765 "enable_zerocopy_send_server": true, 00:39:09.765 "enable_zerocopy_send_client": false, 00:39:09.765 "zerocopy_threshold": 0, 00:39:09.765 "tls_version": 0, 00:39:09.765 "enable_ktls": false 00:39:09.765 } 00:39:09.765 } 00:39:09.765 ] 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "subsystem": "vmd", 00:39:09.765 "config": [] 00:39:09.765 }, 00:39:09.765 { 00:39:09.765 "subsystem": "accel", 00:39:09.765 "config": [ 00:39:09.765 { 00:39:09.765 "method": "accel_set_options", 00:39:09.765 "params": { 00:39:09.765 "small_cache_size": 128, 00:39:09.765 "large_cache_size": 16, 00:39:09.765 "task_count": 2048, 00:39:09.765 "sequence_count": 2048, 00:39:09.765 "buf_count": 2048 00:39:09.766 } 00:39:09.766 } 00:39:09.766 ] 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "subsystem": "bdev", 00:39:09.766 "config": [ 00:39:09.766 { 00:39:09.766 "method": "bdev_set_options", 00:39:09.766 "params": { 00:39:09.766 "bdev_io_pool_size": 65535, 00:39:09.766 "bdev_io_cache_size": 256, 00:39:09.766 "bdev_auto_examine": true, 00:39:09.766 "iobuf_small_cache_size": 128, 00:39:09.766 "iobuf_large_cache_size": 16 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_raid_set_options", 00:39:09.766 "params": { 00:39:09.766 "process_window_size_kb": 1024, 00:39:09.766 "process_max_bandwidth_mb_sec": 0 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_iscsi_set_options", 00:39:09.766 "params": { 00:39:09.766 "timeout_sec": 30 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_nvme_set_options", 00:39:09.766 "params": { 00:39:09.766 "action_on_timeout": "none", 00:39:09.766 "timeout_us": 0, 00:39:09.766 "timeout_admin_us": 0, 00:39:09.766 "keep_alive_timeout_ms": 10000, 00:39:09.766 "arbitration_burst": 0, 00:39:09.766 "low_priority_weight": 0, 00:39:09.766 "medium_priority_weight": 0, 00:39:09.766 "high_priority_weight": 0, 00:39:09.766 "nvme_adminq_poll_period_us": 10000, 00:39:09.766 "nvme_ioq_poll_period_us": 0, 00:39:09.766 "io_queue_requests": 512, 00:39:09.766 
"delay_cmd_submit": true, 00:39:09.766 "transport_retry_count": 4, 00:39:09.766 "bdev_retry_count": 3, 00:39:09.766 "transport_ack_timeout": 0, 00:39:09.766 "ctrlr_loss_timeout_sec": 0, 00:39:09.766 "reconnect_delay_sec": 0, 00:39:09.766 "fast_io_fail_timeout_sec": 0, 00:39:09.766 "disable_auto_failback": false, 00:39:09.766 "generate_uuids": false, 00:39:09.766 "transport_tos": 0, 00:39:09.766 "nvme_error_stat": false, 00:39:09.766 "rdma_srq_size": 0, 00:39:09.766 "io_path_stat": false, 00:39:09.766 "allow_accel_sequence": false, 00:39:09.766 "rdma_max_cq_size": 0, 00:39:09.766 "rdma_cm_event_timeout_ms": 0, 00:39:09.766 "dhchap_digests": [ 00:39:09.766 "sha256", 00:39:09.766 "sha384", 00:39:09.766 "sha512" 00:39:09.766 ], 00:39:09.766 "dhchap_dhgroups": [ 00:39:09.766 "null", 00:39:09.766 "ffdhe2048", 00:39:09.766 "ffdhe3072", 00:39:09.766 "ffdhe4096", 00:39:09.766 "ffdhe6144", 00:39:09.766 "ffdhe8192" 00:39:09.766 ] 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_nvme_attach_controller", 00:39:09.766 "params": { 00:39:09.766 "name": "nvme0", 00:39:09.766 "trtype": "TCP", 00:39:09.766 "adrfam": "IPv4", 00:39:09.766 "traddr": "127.0.0.1", 00:39:09.766 "trsvcid": "4420", 00:39:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.766 "prchk_reftag": false, 00:39:09.766 "prchk_guard": false, 00:39:09.766 "ctrlr_loss_timeout_sec": 0, 00:39:09.766 "reconnect_delay_sec": 0, 00:39:09.766 "fast_io_fail_timeout_sec": 0, 00:39:09.766 "psk": "key0", 00:39:09.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.766 "hdgst": false, 00:39:09.766 "ddgst": false, 00:39:09.766 "multipath": "multipath" 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_nvme_set_hotplug", 00:39:09.766 "params": { 00:39:09.766 "period_us": 100000, 00:39:09.766 "enable": false 00:39:09.766 } 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "method": "bdev_wait_for_examine" 00:39:09.766 } 00:39:09.766 ] 00:39:09.766 }, 00:39:09.766 { 00:39:09.766 "subsystem": "nbd", 00:39:09.766 "config": [] 00:39:09.766 } 00:39:09.766 ] 00:39:09.766 }' 00:39:09.766 07:39:44 keyring_file -- keyring/file.sh@115 -- # killprocess 1639990 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1639990 ']' 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1639990 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@957 -- # uname 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1639990 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1639990' 00:39:09.766 killing process with pid 1639990 00:39:09.766 07:39:44 keyring_file -- common/autotest_common.sh@971 -- # kill 1639990 00:39:09.766 Received shutdown signal, test time was about 1.000000 seconds 00:39:09.766 00:39:09.766 Latency(us) 00:39:09.766 [2024-11-20T06:39:44.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.766 [2024-11-20T06:39:44.533Z] =================================================================================================================== 00:39:09.766 [2024-11-20T06:39:44.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:09.766 07:39:44 
keyring_file -- common/autotest_common.sh@976 -- # wait 1639990 00:39:10.027 07:39:44 keyring_file -- keyring/file.sh@118 -- # bperfpid=1641740 00:39:10.027 07:39:44 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1641740 /var/tmp/bperf.sock 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1641740 ']' 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:10.027 07:39:44 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:10.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:10.027 07:39:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:10.027 07:39:44 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:10.027 "subsystems": [ 00:39:10.027 { 00:39:10.027 "subsystem": "keyring", 00:39:10.027 "config": [ 00:39:10.027 { 00:39:10.027 "method": "keyring_file_add_key", 00:39:10.027 "params": { 00:39:10.027 "name": "key0", 00:39:10.027 "path": "/tmp/tmp.TExlCRav08" 00:39:10.027 } 00:39:10.027 }, 00:39:10.027 { 00:39:10.027 "method": "keyring_file_add_key", 00:39:10.027 "params": { 00:39:10.027 "name": "key1", 00:39:10.027 "path": "/tmp/tmp.Z8flxerJjI" 00:39:10.027 } 00:39:10.027 } 00:39:10.027 ] 00:39:10.027 }, 00:39:10.027 { 00:39:10.027 "subsystem": "iobuf", 00:39:10.027 "config": [ 00:39:10.027 { 00:39:10.027 "method": "iobuf_set_options", 00:39:10.027 "params": { 00:39:10.027 "small_pool_count": 8192, 00:39:10.027 "large_pool_count": 1024, 00:39:10.027 "small_bufsize": 8192, 00:39:10.027 "large_bufsize": 135168, 00:39:10.027 "enable_numa": false 00:39:10.027 } 00:39:10.027 } 00:39:10.027 ] 00:39:10.027 }, 00:39:10.027 { 00:39:10.027 "subsystem": "sock", 00:39:10.027 "config": [ 00:39:10.027 { 00:39:10.027 "method": "sock_set_default_impl", 00:39:10.027 "params": { 00:39:10.027 "impl_name": "posix" 00:39:10.027 } 00:39:10.027 }, 00:39:10.027 { 00:39:10.027 "method": "sock_impl_set_options", 00:39:10.027 "params": { 00:39:10.027 "impl_name": "ssl", 00:39:10.027 "recv_buf_size": 4096, 00:39:10.027 "send_buf_size": 4096, 00:39:10.027 "enable_recv_pipe": true, 00:39:10.027 "enable_quickack": false, 00:39:10.027 "enable_placement_id": 0, 00:39:10.027 "enable_zerocopy_send_server": true, 00:39:10.027 "enable_zerocopy_send_client": false, 00:39:10.027 "zerocopy_threshold": 0, 00:39:10.027 "tls_version": 0, 00:39:10.027 "enable_ktls": false 00:39:10.027 } 00:39:10.027 }, 00:39:10.027 { 00:39:10.027 "method": "sock_impl_set_options", 00:39:10.027 "params": { 00:39:10.027 "impl_name": "posix", 00:39:10.027 "recv_buf_size": 2097152, 00:39:10.028 "send_buf_size": 2097152, 00:39:10.028 "enable_recv_pipe": true, 00:39:10.028 "enable_quickack": false, 00:39:10.028 "enable_placement_id": 0, 00:39:10.028 "enable_zerocopy_send_server": true, 00:39:10.028 "enable_zerocopy_send_client": false, 00:39:10.028 "zerocopy_threshold": 0, 00:39:10.028 "tls_version": 0, 00:39:10.028 "enable_ktls": false 00:39:10.028 } 00:39:10.028 } 00:39:10.028 ] 00:39:10.028 }, 
00:39:10.028 { 00:39:10.028 "subsystem": "vmd", 00:39:10.028 "config": [] 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "subsystem": "accel", 00:39:10.028 "config": [ 00:39:10.028 { 00:39:10.028 "method": "accel_set_options", 00:39:10.028 "params": { 00:39:10.028 "small_cache_size": 128, 00:39:10.028 "large_cache_size": 16, 00:39:10.028 "task_count": 2048, 00:39:10.028 "sequence_count": 2048, 00:39:10.028 "buf_count": 2048 00:39:10.028 } 00:39:10.028 } 00:39:10.028 ] 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "subsystem": "bdev", 00:39:10.028 "config": [ 00:39:10.028 { 00:39:10.028 "method": "bdev_set_options", 00:39:10.028 "params": { 00:39:10.028 "bdev_io_pool_size": 65535, 00:39:10.028 "bdev_io_cache_size": 256, 00:39:10.028 "bdev_auto_examine": true, 00:39:10.028 "iobuf_small_cache_size": 128, 00:39:10.028 "iobuf_large_cache_size": 16 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_raid_set_options", 00:39:10.028 "params": { 00:39:10.028 "process_window_size_kb": 1024, 00:39:10.028 "process_max_bandwidth_mb_sec": 0 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_iscsi_set_options", 00:39:10.028 "params": { 00:39:10.028 "timeout_sec": 30 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_nvme_set_options", 00:39:10.028 "params": { 00:39:10.028 "action_on_timeout": "none", 00:39:10.028 "timeout_us": 0, 00:39:10.028 "timeout_admin_us": 0, 00:39:10.028 "keep_alive_timeout_ms": 10000, 00:39:10.028 "arbitration_burst": 0, 00:39:10.028 "low_priority_weight": 0, 00:39:10.028 "medium_priority_weight": 0, 00:39:10.028 "high_priority_weight": 0, 00:39:10.028 "nvme_adminq_poll_period_us": 10000, 00:39:10.028 "nvme_ioq_poll_period_us": 0, 00:39:10.028 "io_queue_requests": 512, 00:39:10.028 "delay_cmd_submit": true, 00:39:10.028 "transport_retry_count": 4, 00:39:10.028 "bdev_retry_count": 3, 00:39:10.028 "transport_ack_timeout": 0, 00:39:10.028 "ctrlr_loss_timeout_sec": 0, 00:39:10.028 "reconnect_delay_sec": 0, 00:39:10.028 "fast_io_fail_timeout_sec": 0, 00:39:10.028 "disable_auto_failback": false, 00:39:10.028 "generate_uuids": false, 00:39:10.028 "transport_tos": 0, 00:39:10.028 "nvme_error_stat": false, 00:39:10.028 "rdma_srq_size": 0, 00:39:10.028 "io_path_stat": false, 00:39:10.028 "allow_accel_sequence": false, 00:39:10.028 "rdma_max_cq_size": 0, 00:39:10.028 "rdma_cm_event_timeout_ms": 0, 00:39:10.028 "dhchap_digests": [ 00:39:10.028 "sha256", 00:39:10.028 "sha384", 00:39:10.028 "sha512" 00:39:10.028 ], 00:39:10.028 "dhchap_dhgroups": [ 00:39:10.028 "null", 00:39:10.028 "ffdhe2048", 00:39:10.028 "ffdhe3072", 00:39:10.028 "ffdhe4096", 00:39:10.028 "ffdhe6144", 00:39:10.028 "ffdhe8192" 00:39:10.028 ] 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_nvme_attach_controller", 00:39:10.028 "params": { 00:39:10.028 "name": "nvme0", 00:39:10.028 "trtype": "TCP", 00:39:10.028 "adrfam": "IPv4", 00:39:10.028 "traddr": "127.0.0.1", 00:39:10.028 "trsvcid": "4420", 00:39:10.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:10.028 "prchk_reftag": false, 00:39:10.028 "prchk_guard": false, 00:39:10.028 "ctrlr_loss_timeout_sec": 0, 00:39:10.028 "reconnect_delay_sec": 0, 00:39:10.028 "fast_io_fail_timeout_sec": 0, 00:39:10.028 "psk": "key0", 00:39:10.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:10.028 "hdgst": false, 00:39:10.028 "ddgst": false, 00:39:10.028 "multipath": "multipath" 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_nvme_set_hotplug", 00:39:10.028 "params": { 
00:39:10.028 "period_us": 100000, 00:39:10.028 "enable": false 00:39:10.028 } 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "method": "bdev_wait_for_examine" 00:39:10.028 } 00:39:10.028 ] 00:39:10.028 }, 00:39:10.028 { 00:39:10.028 "subsystem": "nbd", 00:39:10.028 "config": [] 00:39:10.028 } 00:39:10.028 ] 00:39:10.028 }' 00:39:10.028 [2024-11-20 07:39:44.593274] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 00:39:10.028 [2024-11-20 07:39:44.593332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641740 ] 00:39:10.028 [2024-11-20 07:39:44.680301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.028 [2024-11-20 07:39:44.709389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.290 [2024-11-20 07:39:44.853710] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:10.863 07:39:45 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:10.863 07:39:45 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:39:10.863 07:39:45 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:10.863 07:39:45 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.863 07:39:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:10.863 07:39:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.863 07:39:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:11.124 07:39:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:11.124 07:39:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:11.124 07:39:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:11.124 07:39:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.124 07:39:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.124 07:39:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.124 07:39:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:11.384 07:39:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:11.384 07:39:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:11.384 07:39:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:11.384 07:39:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:11.384 07:39:46 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:11.384 07:39:46 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:11.384 07:39:46 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.TExlCRav08 /tmp/tmp.Z8flxerJjI 00:39:11.384 07:39:46 keyring_file -- keyring/file.sh@20 -- # killprocess 1641740 00:39:11.384 07:39:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1641740 ']' 00:39:11.384 07:39:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1641740 00:39:11.384 07:39:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:39:11.384 07:39:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:11.384 07:39:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1641740 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1641740' 00:39:11.645 killing process with pid 1641740 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@971 -- # kill 1641740 00:39:11.645 Received shutdown signal, test time was about 1.000000 seconds 00:39:11.645 00:39:11.645 Latency(us) 00:39:11.645 [2024-11-20T06:39:46.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.645 [2024-11-20T06:39:46.412Z] =================================================================================================================== 00:39:11.645 [2024-11-20T06:39:46.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@976 -- # wait 1641740 00:39:11.645 07:39:46 keyring_file -- keyring/file.sh@21 -- # killprocess 1639913 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1639913 ']' 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1639913 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1639913 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1639913' 00:39:11.645 killing process with pid 1639913 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@971 -- # kill 1639913 00:39:11.645 07:39:46 keyring_file -- common/autotest_common.sh@976 -- # wait 1639913 00:39:11.906 00:39:11.906 real 0m11.900s 00:39:11.906 user 0m28.704s 00:39:11.906 sys 0m2.592s 00:39:11.906 07:39:46 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:11.906 07:39:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:11.906 ************************************ 00:39:11.906 END TEST keyring_file 00:39:11.906 ************************************ 00:39:11.906 07:39:46 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:39:11.906 07:39:46 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.906 07:39:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:11.906 07:39:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:11.906 07:39:46 
-- common/autotest_common.sh@10 -- # set +x 00:39:11.906 ************************************ 00:39:11.906 START TEST keyring_linux 00:39:11.906 ************************************ 00:39:11.906 07:39:46 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.906 Joined session keyring: 879259023 00:39:12.167 * Looking for test storage... 00:39:12.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.168 --rc genhtml_branch_coverage=1 00:39:12.168 --rc genhtml_function_coverage=1 00:39:12.168 --rc genhtml_legend=1 00:39:12.168 --rc geninfo_all_blocks=1 00:39:12.168 --rc geninfo_unexecuted_blocks=1 00:39:12.168 00:39:12.168 ' 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.168 --rc genhtml_branch_coverage=1 00:39:12.168 --rc genhtml_function_coverage=1 00:39:12.168 --rc genhtml_legend=1 00:39:12.168 --rc geninfo_all_blocks=1 00:39:12.168 --rc geninfo_unexecuted_blocks=1 00:39:12.168 00:39:12.168 ' 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.168 --rc genhtml_branch_coverage=1 00:39:12.168 --rc genhtml_function_coverage=1 00:39:12.168 --rc genhtml_legend=1 00:39:12.168 --rc geninfo_all_blocks=1 00:39:12.168 --rc geninfo_unexecuted_blocks=1 00:39:12.168 00:39:12.168 ' 00:39:12.168 07:39:46 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.168 --rc genhtml_branch_coverage=1 00:39:12.168 --rc genhtml_function_coverage=1 00:39:12.168 --rc genhtml_legend=1 00:39:12.168 --rc geninfo_all_blocks=1 00:39:12.168 --rc geninfo_unexecuted_blocks=1 00:39:12.168 00:39:12.168 ' 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.168 07:39:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.168 07:39:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.168 07:39:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.168 07:39:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.168 07:39:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:12.168 07:39:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:12.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:12.168 07:39:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:12.168 07:39:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:12.168 07:39:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:12.169 /tmp/:spdk-test:key0 00:39:12.169 07:39:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:12.169 07:39:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:12.169 07:39:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:12.169 07:39:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:12.169 07:39:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:12.169 07:39:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:12.169 
07:39:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:12.169 07:39:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:12.430 07:39:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:12.430 07:39:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:12.430 /tmp/:spdk-test:key1 00:39:12.430 07:39:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1642303 00:39:12.430 07:39:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1642303 00:39:12.430 07:39:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1642303 ']' 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:12.430 07:39:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:12.430 [2024-11-20 07:39:47.011811] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
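The prep_key calls above turn each raw hex key into the NVMe/TCP TLS PSK interchange form before writing it out (key0's 00112233445566778899aabbccddeeff becomes the NVMeTLSkey-1:00:...: string echoed into /tmp/:spdk-test:key0). A minimal sketch of that transformation, assuming the interchange layout is "<prefix>:<2-hex-digit digest>:<base64(key bytes || CRC32)>:" with the CRC computed as by zlib and appended little-endian; format_key_sketch is an illustrative name, not a helper from the suite:

    format_key_sketch() {
        local prefix=$1 key=$2 digest=$3
        # Mirrors the inline "python -" step in the log: append a 4-byte CRC32
        # to the literal key text, base64 the result, and wrap it in the prefix.
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
    }
    format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
    # If the CRC assumption holds, this reproduces the key0 payload loaded below:
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: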
00:39:12.430 [2024-11-20 07:39:47.011875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642303 ] 00:39:12.430 [2024-11-20 07:39:47.088832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.430 [2024-11-20 07:39:47.125065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.370 07:39:47 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:13.370 07:39:47 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:13.370 07:39:47 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.370 07:39:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:13.370 [2024-11-20 07:39:47.795498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:13.370 null0 00:39:13.370 [2024-11-20 07:39:47.827555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:13.370 [2024-11-20 07:39:47.827958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:13.370 07:39:47 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:13.370 160580488 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:13.370 318630676 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1642512 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1642512 /var/tmp/bperf.sock 00:39:13.370 07:39:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1642512 ']' 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:13.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:13.371 07:39:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:13.371 [2024-11-20 07:39:47.904794] Starting SPDK v25.01-pre git sha1 8ccf9ce7b / DPDK 24.03.0 initialization... 
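The serials 160580488 and 318630676 printed above come from loading those interchange strings into the kernel session keyring, which is also how the cleanup path resolves them later before unlinking. A minimal round-trip sketch, assuming keyutils is installed and the shell owns a session keyring (as keyctl-session-wrapper arranges); the :sketch-key description is illustrative:

    # Add a user-type key to the session keyring (@s); keyctl prints its serial.
    sn=$(keyctl add user :sketch-key "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl search @s user :sketch-key   # resolves the description back to the same serial
    keyctl print "$sn"                  # dumps the stored payload for comparison
    keyctl unlink "$sn"                 # reports "1 links removed", as in the cleanup below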
00:39:13.371 [2024-11-20 07:39:47.904843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642512 ] 00:39:13.371 [2024-11-20 07:39:47.995333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.371 [2024-11-20 07:39:48.025398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.941 07:39:48 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:13.941 07:39:48 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:39:13.941 07:39:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:13.941 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:14.201 07:39:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:14.201 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:14.461 07:39:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:14.461 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:14.721 [2024-11-20 07:39:49.234989] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:14.721 nvme0n1 00:39:14.721 07:39:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:14.721 07:39:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:14.721 07:39:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:14.721 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:14.721 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:14.721 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:14.981 07:39:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:14.981 07:39:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.981 07:39:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@25 -- # sn=160580488 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:14.981 07:39:49 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 160580488 == \1\6\0\5\8\0\4\8\8 ]] 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 160580488 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:14.981 07:39:49 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:15.242 Running I/O for 1 seconds... 00:39:16.183 5417.00 IOPS, 21.16 MiB/s 00:39:16.183 Latency(us) 00:39:16.183 [2024-11-20T06:39:50.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:16.183 nvme0n1 : 1.06 5240.72 20.47 0.00 0.00 24338.83 5270.19 87381.33 00:39:16.183 [2024-11-20T06:39:50.950Z] =================================================================================================================== 00:39:16.183 [2024-11-20T06:39:50.950Z] Total : 5240.72 20.47 0.00 0.00 24338.83 5270.19 87381.33 00:39:16.183 { 00:39:16.183 "results": [ 00:39:16.183 { 00:39:16.183 "job": "nvme0n1", 00:39:16.183 "core_mask": "0x2", 00:39:16.183 "workload": "randread", 00:39:16.183 "status": "finished", 00:39:16.183 "queue_depth": 128, 00:39:16.183 "io_size": 4096, 00:39:16.183 "runtime": 1.05806, 00:39:16.183 "iops": 5240.723588454341, 00:39:16.183 "mibps": 20.471576517399768, 00:39:16.183 "io_failed": 0, 00:39:16.183 "io_timeout": 0, 00:39:16.183 "avg_latency_us": 24338.825964532614, 00:39:16.183 "min_latency_us": 5270.1866666666665, 00:39:16.183 "max_latency_us": 87381.33333333333 00:39:16.183 } 00:39:16.183 ], 00:39:16.183 "core_count": 1 00:39:16.183 } 00:39:16.183 07:39:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:16.183 07:39:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:16.443 07:39:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:16.443 07:39:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:16.443 07:39:51 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.443 07:39:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:16.703 [2024-11-20 07:39:51.358040] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:16.703 [2024-11-20 07:39:51.358785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5b760 (107): Transport endpoint is not connected 00:39:16.704 [2024-11-20 07:39:51.359783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5b760 (9): Bad file descriptor 00:39:16.704 [2024-11-20 07:39:51.360784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:16.704 [2024-11-20 07:39:51.360791] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:16.704 [2024-11-20 07:39:51.360797] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:16.704 [2024-11-20 07:39:51.360803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
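The error records above come from the deliberately failing attach: bdevperf is pointed at :spdk-test:key1 while the listener was set up with key0, so the controller never initializes, and the harness's NOT wrapper converts that expected nonzero exit into a pass (the es=1 bookkeeping that follows the JSON dump below). A simplified sketch of such an exit-status inverter, assuming a stripped-down version of the real autotest_common.sh logic (no special-casing of signal exits):

    NOT_sketch() {
        local es=0
        "$@" || es=$?        # capture the wrapped command's exit status
        (( es != 0 ))        # succeed only if the command failed
    }
    NOT_sketch false && echo "expected failure observed: PASS"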
00:39:16.704 request: 00:39:16.704 { 00:39:16.704 "name": "nvme0", 00:39:16.704 "trtype": "tcp", 00:39:16.704 "traddr": "127.0.0.1", 00:39:16.704 "adrfam": "ipv4", 00:39:16.704 "trsvcid": "4420", 00:39:16.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.704 "prchk_reftag": false, 00:39:16.704 "prchk_guard": false, 00:39:16.704 "hdgst": false, 00:39:16.704 "ddgst": false, 00:39:16.704 "psk": ":spdk-test:key1", 00:39:16.704 "allow_unrecognized_csi": false, 00:39:16.704 "method": "bdev_nvme_attach_controller", 00:39:16.704 "req_id": 1 00:39:16.704 } 00:39:16.704 Got JSON-RPC error response 00:39:16.704 response: 00:39:16.704 { 00:39:16.704 "code": -5, 00:39:16.704 "message": "Input/output error" 00:39:16.704 } 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # sn=160580488 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 160580488 00:39:16.704 1 links removed 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@33 -- # sn=318630676 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 318630676 00:39:16.704 1 links removed 00:39:16.704 07:39:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1642512 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1642512 ']' 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1642512 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:16.704 07:39:51 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1642512 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1642512' 00:39:16.965 killing process with pid 1642512 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@971 -- # kill 1642512 00:39:16.965 Received shutdown signal, test time was about 1.000000 seconds 00:39:16.965 00:39:16.965 
Latency(us) 00:39:16.965 [2024-11-20T06:39:51.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.965 [2024-11-20T06:39:51.732Z] =================================================================================================================== 00:39:16.965 [2024-11-20T06:39:51.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@976 -- # wait 1642512 00:39:16.965 07:39:51 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1642303 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1642303 ']' 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1642303 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1642303 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1642303' 00:39:16.965 killing process with pid 1642303 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@971 -- # kill 1642303 00:39:16.965 07:39:51 keyring_linux -- common/autotest_common.sh@976 -- # wait 1642303 00:39:17.226 00:39:17.226 real 0m5.236s 00:39:17.226 user 0m10.090s 00:39:17.226 sys 0m1.167s 00:39:17.226 07:39:51 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:17.226 07:39:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:17.226 ************************************ 00:39:17.226 END TEST keyring_linux 00:39:17.226 ************************************ 00:39:17.226 07:39:51 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:17.226 07:39:51 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:17.226 07:39:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:17.226 07:39:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:17.226 07:39:51 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:17.226 07:39:51 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:17.226 07:39:51 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:17.226 07:39:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:17.226 07:39:51 -- common/autotest_common.sh@10 -- # set +x 00:39:17.226 07:39:51 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:17.226 07:39:51 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:39:17.226 07:39:51 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:39:17.226 07:39:51 -- common/autotest_common.sh@10 -- # set +x 00:39:25.370 INFO: APP EXITING 
00:39:25.370 INFO: killing all VMs 00:39:25.370 INFO: killing vhost app 00:39:25.370 INFO: EXIT DONE 00:39:28.666 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:28.666 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:28.666 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:31.965 Cleaning 00:39:31.965 Removing: /var/run/dpdk/spdk0/config 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:31.965 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:31.965 Removing: /var/run/dpdk/spdk1/config 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:31.965 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:31.965 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:31.965 Removing: /var/run/dpdk/spdk2/config 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:31.965 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:31.965 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:31.965 Removing: /var/run/dpdk/spdk3/config 00:39:31.965 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:31.965 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:32.226 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:32.226 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:32.226 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:32.226 Removing: /var/run/dpdk/spdk4/config 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:32.226 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:32.226 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:32.226 Removing: /dev/shm/bdev_svc_trace.1 00:39:32.226 Removing: /dev/shm/nvmf_trace.0 00:39:32.226 Removing: /dev/shm/spdk_tgt_trace.pid1026637 00:39:32.226 Removing: /var/run/dpdk/spdk0 00:39:32.226 Removing: /var/run/dpdk/spdk1 00:39:32.226 Removing: /var/run/dpdk/spdk2 00:39:32.226 Removing: /var/run/dpdk/spdk3 00:39:32.226 Removing: /var/run/dpdk/spdk4 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1024934 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1026637 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1027196 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1028393 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1028576 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1029837 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1029970 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1030432 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1031511 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1032038 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1032438 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1032829 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1033251 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1033645 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1034005 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1034142 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1034431 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1035813 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1039079 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1039446 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1039712 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1039821 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1040247 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1040526 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1040906 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1041208 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1041440 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1041614 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1041903 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1041992 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1042487 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1042799 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1043194 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1048394 00:39:32.226 Removing: 
/var/run/dpdk/spdk_pid1054252 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1067459 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1068148 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1073905 00:39:32.226 Removing: /var/run/dpdk/spdk_pid1074267 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1080010 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1087444 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1090559 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1104436 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1116802 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1118976 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1120121 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1142529 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1147970 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1208720 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1215809 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1223458 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1232176 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1232237 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1233246 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1234232 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1235291 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1236172 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1236191 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1236522 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1236538 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1236540 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1237546 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1238551 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1239615 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1240250 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1240396 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1240639 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1242014 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1243421 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1254091 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1290748 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1296667 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1298552 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1300689 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1300709 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1300985 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1301064 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1301766 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1303787 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1304910 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1305680 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1308596 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1309405 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1310261 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1315680 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1322729 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1322730 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1322731 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1328102 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1339392 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1344215 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1352111 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1353615 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1355456 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1357032 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1363755 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1369435 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1374820 00:39:32.488 Removing: /var/run/dpdk/spdk_pid1385185 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1385259 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1390846 00:39:32.748 Removing: 
/var/run/dpdk/spdk_pid1391015 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1391343 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1391870 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1392012 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1398066 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1398642 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1404431 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1407787 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1414847 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1422637 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1433272 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1442720 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1442722 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1468104 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1468790 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1469476 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1470209 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1471324 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1472079 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1473300 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1474117 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1479824 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1480030 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1487712 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1487993 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1495120 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1500669 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1512643 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1513388 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1518976 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1519330 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1524964 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1532746 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1535758 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1548938 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1560651 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1562599 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1563691 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1585251 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1590461 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1593704 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1601430 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1601563 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1608220 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1610503 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1612858 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1614218 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1616537 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1617939 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1628809 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1629471 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1630154 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1633682 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1634130 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1634777 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1639913 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1639990 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1641740 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1642303 00:39:32.748 Removing: /var/run/dpdk/spdk_pid1642512 00:39:32.748 Clean 00:39:33.008 07:40:07 -- common/autotest_common.sh@1451 -- # return 0 00:39:33.008 07:40:07 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:33.008 07:40:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.008 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:39:33.008 07:40:07 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:33.008 
07:40:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.008 07:40:07 -- common/autotest_common.sh@10 -- # set +x 00:39:33.008 07:40:07 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:33.008 07:40:07 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:33.009 07:40:07 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:33.009 07:40:07 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:39:33.009 07:40:07 -- spdk/autotest.sh@394 -- # hostname 00:39:33.009 07:40:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:33.269 geninfo: WARNING: invalid characters removed from testname! 00:39:59.923 07:40:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:01.836 07:40:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:03.746 07:40:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:05.127 07:40:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:07.671 07:40:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:09.054 07:40:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:10.964 07:40:45 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:10.964 07:40:45 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:10.964 07:40:45 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:10.964 07:40:45 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:10.964 07:40:45 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:10.964 07:40:45 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:10.964 + [[ -n 939162 ]] 00:40:10.964 + sudo kill 939162 00:40:10.973 [Pipeline] } 00:40:10.987 [Pipeline] // stage 00:40:10.991 [Pipeline] } 00:40:11.003 [Pipeline] // timeout 00:40:11.006 [Pipeline] } 00:40:11.016 [Pipeline] // catchError 00:40:11.020 [Pipeline] } 00:40:11.032 [Pipeline] // wrap 00:40:11.037 [Pipeline] } 00:40:11.050 [Pipeline] // catchError 00:40:11.061 [Pipeline] stage 00:40:11.065 [Pipeline] { (Epilogue) 00:40:11.078 [Pipeline] catchError 00:40:11.081 [Pipeline] { 00:40:11.093 [Pipeline] echo 00:40:11.095 Cleanup processes 00:40:11.100 [Pipeline] sh 00:40:11.387 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:11.387 1655891 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:11.402 [Pipeline] sh 00:40:11.691 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:11.691 ++ grep -v 'sudo pgrep' 00:40:11.691 ++ awk '{print $1}' 00:40:11.691 + sudo kill -9 00:40:11.691 + true 00:40:11.705 [Pipeline] sh 00:40:11.993 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:24.235 [Pipeline] sh 00:40:24.527 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:24.527 Artifacts sizes are good 00:40:24.543 [Pipeline] archiveArtifacts 00:40:24.551 Archiving artifacts 00:40:24.680 [Pipeline] sh 00:40:24.964 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:24.979 [Pipeline] cleanWs 00:40:24.990 [WS-CLEANUP] Deleting project workspace... 00:40:24.990 [WS-CLEANUP] Deferred wipeout is used... 00:40:24.997 [WS-CLEANUP] done 00:40:24.999 [Pipeline] } 00:40:25.017 [Pipeline] // catchError 00:40:25.031 [Pipeline] sh 00:40:25.407 + logger -p user.info -t JENKINS-CI 00:40:25.417 [Pipeline] } 00:40:25.431 [Pipeline] // stage 00:40:25.441 [Pipeline] } 00:40:25.455 [Pipeline] // node 00:40:25.460 [Pipeline] End of Pipeline 00:40:25.489 Finished: SUCCESS